00:00:00.001 Started by upstream project "autotest-per-patch" build number 132318
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.062 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.063 The recommended git tool is: git
00:00:00.063 using credential 00000000-0000-0000-0000-000000000002
00:00:00.065 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.104 Fetching changes from the remote Git repository
00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.170 Using shallow fetch with depth 1
00:00:00.170 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.170 > git --version # timeout=10
00:00:00.235 > git --version # 'git version 2.39.2'
00:00:00.235 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.284 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.284 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.838 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.850 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.862 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.862 > git config core.sparsecheckout # timeout=10
00:00:03.873 > git read-tree -mu HEAD # timeout=10
00:00:03.890 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.915 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.915 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.032 [Pipeline] Start of Pipeline
00:00:04.046 [Pipeline] library
00:00:04.048 Loading library shm_lib@master
00:00:04.048 Library shm_lib@master is cached. Copying from home.
00:00:04.066 [Pipeline] node
00:00:04.088 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.090 [Pipeline] {
00:00:04.101 [Pipeline] catchError
00:00:04.102 [Pipeline] {
00:00:04.116 [Pipeline] wrap
00:00:04.126 [Pipeline] {
00:00:04.134 [Pipeline] stage
00:00:04.136 [Pipeline] { (Prologue)
00:00:04.336 [Pipeline] sh
00:00:04.626 + logger -p user.info -t JENKINS-CI
00:00:04.643 [Pipeline] echo
00:00:04.644 Node: CYP9
00:00:04.650 [Pipeline] sh
00:00:04.951 [Pipeline] setCustomBuildProperty
00:00:04.960 [Pipeline] echo
00:00:04.961 Cleanup processes
00:00:04.966 [Pipeline] sh
00:00:05.255 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.255 692280 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.266 [Pipeline] sh
00:00:05.551 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.551 ++ grep -v 'sudo pgrep'
00:00:05.551 ++ awk '{print $1}'
00:00:05.551 + sudo kill -9
00:00:05.551 + true
00:00:05.566 [Pipeline] cleanWs
00:00:05.577 [WS-CLEANUP] Deleting project workspace...
00:00:05.577 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.590 [WS-CLEANUP] done
00:00:05.594 [Pipeline] setCustomBuildProperty
00:00:05.608 [Pipeline] sh
00:00:05.895 + sudo git config --global --replace-all safe.directory '*'
00:00:05.971 [Pipeline] httpRequest
00:00:07.586 [Pipeline] echo
00:00:07.588 Sorcerer 10.211.164.20 is alive
00:00:07.597 [Pipeline] retry
00:00:07.599 [Pipeline] {
00:00:07.610 [Pipeline] httpRequest
00:00:07.613 HttpMethod: GET
00:00:07.614 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.615 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.629 Response Code: HTTP/1.1 200 OK
00:00:07.630 Success: Status code 200 is in the accepted range: 200,404
00:00:07.630 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:17.397 [Pipeline] }
00:00:17.407 [Pipeline] // retry
00:00:17.412 [Pipeline] sh
00:00:17.697 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:17.711 [Pipeline] httpRequest
00:00:18.040 [Pipeline] echo
00:00:18.041 Sorcerer 10.211.164.20 is alive
00:00:18.049 [Pipeline] retry
00:00:18.051 [Pipeline] {
00:00:18.061 [Pipeline] httpRequest
00:00:18.065 HttpMethod: GET
00:00:18.066 URL: http://10.211.164.20/packages/spdk_03b7aa9c74374ff9c19ddc0e7c6c0385dfbc43d0.tar.gz
00:00:18.068 Sending request to url: http://10.211.164.20/packages/spdk_03b7aa9c74374ff9c19ddc0e7c6c0385dfbc43d0.tar.gz
00:00:18.090 Response Code: HTTP/1.1 200 OK
00:00:18.091 Success: Status code 200 is in the accepted range: 200,404
00:00:18.091 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_03b7aa9c74374ff9c19ddc0e7c6c0385dfbc43d0.tar.gz
00:04:11.004 [Pipeline] }
00:04:11.022 [Pipeline] // retry
00:04:11.032 [Pipeline] sh
00:04:11.329 + tar --no-same-owner -xf spdk_03b7aa9c74374ff9c19ddc0e7c6c0385dfbc43d0.tar.gz
00:04:14.645 [Pipeline] sh
00:04:14.934 + git -C spdk log --oneline -n5
00:04:14.934 03b7aa9c7 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header.
00:04:14.934 d47eb51c9 bdev: fix a race between reset start and complete
00:04:14.934 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:04:14.934 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:04:14.934 4bcab9fb9 correct kick for CQ full case
00:04:14.947 [Pipeline] }
00:04:14.961 [Pipeline] // stage
00:04:14.970 [Pipeline] stage
00:04:14.972 [Pipeline] { (Prepare)
00:04:14.990 [Pipeline] writeFile
00:04:15.007 [Pipeline] sh
00:04:15.297 + logger -p user.info -t JENKINS-CI
00:04:15.311 [Pipeline] sh
00:04:15.601 + logger -p user.info -t JENKINS-CI
00:04:15.615 [Pipeline] sh
00:04:15.905 + cat autorun-spdk.conf
00:04:15.905 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:15.905 SPDK_TEST_NVMF=1
00:04:15.905 SPDK_TEST_NVME_CLI=1
00:04:15.905 SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:15.905 SPDK_TEST_NVMF_NICS=e810
00:04:15.905 SPDK_TEST_VFIOUSER=1
00:04:15.905 SPDK_RUN_UBSAN=1
00:04:15.905 NET_TYPE=phy
00:04:15.913 RUN_NIGHTLY=0
00:04:15.918 [Pipeline] readFile
00:04:15.948 [Pipeline] withEnv
00:04:15.951 [Pipeline] {
00:04:15.966 [Pipeline] sh
00:04:16.262 + set -ex
00:04:16.262 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:04:16.262 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:16.262 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:16.262 ++ SPDK_TEST_NVMF=1
00:04:16.262 ++ SPDK_TEST_NVME_CLI=1
00:04:16.262 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:16.262 ++ SPDK_TEST_NVMF_NICS=e810
00:04:16.262 ++ SPDK_TEST_VFIOUSER=1
00:04:16.262 ++ SPDK_RUN_UBSAN=1
00:04:16.262 ++ NET_TYPE=phy
00:04:16.262 ++ RUN_NIGHTLY=0
00:04:16.262 + case $SPDK_TEST_NVMF_NICS in
00:04:16.262 + DRIVERS=ice
00:04:16.262 + [[ tcp == \r\d\m\a ]]
00:04:16.262 + [[ -n ice ]]
00:04:16.262 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:04:16.262 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:04:16.262 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:04:16.262 rmmod: ERROR: Module irdma is not currently loaded
00:04:16.262 rmmod: ERROR: Module i40iw is not currently loaded
00:04:16.262 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:04:16.262 + true
00:04:16.262 + for D in $DRIVERS
00:04:16.262 + sudo modprobe ice
00:04:16.262 + exit 0
00:04:16.273 [Pipeline] }
00:04:16.292 [Pipeline] // withEnv
00:04:16.299 [Pipeline] }
00:04:16.315 [Pipeline] // stage
00:04:16.325 [Pipeline] catchError
00:04:16.327 [Pipeline] {
00:04:16.345 [Pipeline] timeout
00:04:16.345 Timeout set to expire in 1 hr 0 min
00:04:16.347 [Pipeline] {
00:04:16.362 [Pipeline] stage
00:04:16.364 [Pipeline] { (Tests)
00:04:16.380 [Pipeline] sh
00:04:16.672 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:16.672 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:16.672 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:16.672 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:04:16.672 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:16.672 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:16.672 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:04:16.672 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:16.672 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:16.672 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:16.672 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:04:16.672 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:16.672 + source /etc/os-release
00:04:16.672 ++ NAME='Fedora Linux'
00:04:16.672 ++ VERSION='39 (Cloud Edition)'
00:04:16.672 ++ ID=fedora
00:04:16.672 ++ VERSION_ID=39
00:04:16.672 ++ VERSION_CODENAME=
00:04:16.672 ++ PLATFORM_ID=platform:f39
00:04:16.672 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:16.672 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:16.672 ++ LOGO=fedora-logo-icon
00:04:16.672 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:16.672 ++ HOME_URL=https://fedoraproject.org/
00:04:16.672 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:16.672 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:16.672 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:16.672 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:16.672 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:16.672 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:16.672 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:16.672 ++ SUPPORT_END=2024-11-12
00:04:16.672 ++ VARIANT='Cloud Edition'
00:04:16.672 ++ VARIANT_ID=cloud
00:04:16.672 + uname -a
00:04:16.672 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:16.672 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:19.975 Hugepages
00:04:19.975 node hugesize free / total
00:04:19.975 node0 1048576kB 0 / 0
00:04:19.975 node0 2048kB 0 / 0
00:04:19.975 node1 1048576kB 0 / 0
00:04:19.975 node1 2048kB 0 / 0
00:04:19.975
00:04:19.975 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:19.975 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:04:19.975 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:04:19.975 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:04:19.975 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:04:19.975 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:04:19.975 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:04:19.975 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:04:19.975 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:04:19.975 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:04:19.975 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:04:19.975 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:04:19.975 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:04:19.975 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:04:19.975 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:04:19.975 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:04:19.975 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:04:19.975 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:04:19.975 + rm -f /tmp/spdk-ld-path
00:04:19.975 + source autorun-spdk.conf
00:04:19.975 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:19.975 ++ SPDK_TEST_NVMF=1
00:04:19.975 ++ SPDK_TEST_NVME_CLI=1
00:04:19.975 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:19.975 ++ SPDK_TEST_NVMF_NICS=e810
00:04:19.975 ++ SPDK_TEST_VFIOUSER=1
00:04:19.975 ++ SPDK_RUN_UBSAN=1
00:04:19.975 ++ NET_TYPE=phy
00:04:19.975 ++ RUN_NIGHTLY=0
00:04:19.975 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:19.975 + [[ -n '' ]]
00:04:19.976 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:19.976 + for M in /var/spdk/build-*-manifest.txt
00:04:19.976 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:19.976 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:19.976 + for M in /var/spdk/build-*-manifest.txt
00:04:19.976 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:19.976 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:19.976 + for M in /var/spdk/build-*-manifest.txt
00:04:19.976 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:19.976 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:19.976 ++ uname
00:04:19.976 + [[ Linux == \L\i\n\u\x ]]
00:04:19.976 + sudo dmesg -T
00:04:19.976 + sudo dmesg --clear
00:04:19.976 + dmesg_pid=693827
00:04:19.976 + [[ Fedora Linux == FreeBSD ]]
00:04:19.976 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:19.976 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:19.976 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:19.976 + [[ -x /usr/src/fio-static/fio ]]
00:04:19.976 + export FIO_BIN=/usr/src/fio-static/fio
00:04:19.976 + FIO_BIN=/usr/src/fio-static/fio
00:04:19.976 + sudo dmesg -Tw
00:04:19.976 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:19.976 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:19.976 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:19.976 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:19.976 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:19.976 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:19.976 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:19.976 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:19.976 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:20.262 10:31:59 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:20.262 10:31:59 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:20.262 10:31:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:20.262 10:31:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:04:20.262 10:31:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:04:20.262 10:31:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:20.262 10:31:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:04:20.262 10:31:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:04:20.262 10:31:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:04:20.262 10:31:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:04:20.262 10:31:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:04:20.262 10:31:59 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:20.262 10:31:59 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:20.262 10:31:59 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:20.262 10:31:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:20.262 10:31:59 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:20.262 10:31:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:20.262 10:31:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:20.262 10:31:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:20.262 10:31:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:20.262 10:31:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:20.262 10:31:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:20.262 10:31:59 -- paths/export.sh@5 -- $ export PATH
00:04:20.262 10:31:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:20.262 10:31:59 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:04:20.262 10:31:59 -- common/autobuild_common.sh@486 -- $ date +%s
00:04:20.262 10:31:59 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732008719.XXXXXX
00:04:20.262 10:31:59 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732008719.GQPyEB
00:04:20.262 10:31:59 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:04:20.262 10:31:59 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:04:20.262 10:31:59 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:04:20.262 10:31:59 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:04:20.262 10:31:59 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:04:20.262 10:31:59 -- common/autobuild_common.sh@502 -- $ get_config_params
00:04:20.262 10:31:59 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:20.262 10:31:59 -- common/autotest_common.sh@10 -- $ set +x
00:04:20.262 10:31:59 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:04:20.262 10:31:59 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:04:20.262 10:31:59 -- pm/common@17 -- $ local monitor
00:04:20.262 10:31:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.262 10:31:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.262 10:31:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.262 10:31:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:20.262 10:31:59 -- pm/common@21 -- $ date +%s
00:04:20.262 10:31:59 -- pm/common@25 -- $ sleep 1
00:04:20.262 10:31:59 -- pm/common@21 -- $ date +%s
00:04:20.262 10:31:59 -- pm/common@21 -- $ date +%s
00:04:20.262 10:31:59 -- pm/common@21 -- $ date +%s
00:04:20.262 10:31:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008719
00:04:20.262 10:31:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008719
00:04:20.262 10:31:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008719
00:04:20.262 10:31:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008719
00:04:20.262 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008719_collect-cpu-load.pm.log
00:04:20.262 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008719_collect-vmstat.pm.log
00:04:20.262 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008719_collect-cpu-temp.pm.log
00:04:20.262 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008719_collect-bmc-pm.bmc.pm.log
00:04:21.252 10:32:00 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:04:21.252 10:32:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:21.252 10:32:00 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:21.252 10:32:00 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:21.252 10:32:00 -- spdk/autobuild.sh@16 -- $ date -u
00:04:21.252 Tue Nov 19 09:32:00 AM UTC 2024
00:04:21.252 10:32:00 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:21.252 v25.01-pre-191-g03b7aa9c7
00:04:21.252 10:32:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:04:21.252 10:32:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:21.252 10:32:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:21.252 10:32:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:21.252 10:32:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:21.252 10:32:00 -- common/autotest_common.sh@10 -- $ set +x
00:04:21.252 ************************************
00:04:21.252 START TEST ubsan
00:04:21.252 ************************************
00:04:21.252 10:32:00 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:21.252 using ubsan
00:04:21.252
00:04:21.252 real 0m0.001s
00:04:21.252 user 0m0.000s
00:04:21.252 sys 0m0.000s
00:04:21.252 10:32:00 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:21.252 10:32:00 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:21.252 ************************************
00:04:21.252 END TEST ubsan
00:04:21.252 ************************************
00:04:21.552 10:32:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:21.552 10:32:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:21.552 10:32:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:21.552 10:32:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:21.552 10:32:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:21.552 10:32:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:21.552 10:32:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:21.552 10:32:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:21.552 10:32:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:04:21.552 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:04:21.552 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:04:21.811 Using 'verbs' RDMA provider
00:04:37.670 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:04:49.904 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:04:50.742 Creating mk/config.mk...done.
00:04:50.742 Creating mk/cc.flags.mk...done.
00:04:50.742 Type 'make' to build.
00:04:50.742 10:32:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:04:50.742 10:32:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:50.742 10:32:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:50.742 10:32:29 -- common/autotest_common.sh@10 -- $ set +x
00:04:50.742 ************************************
00:04:50.742 START TEST make
00:04:50.742 ************************************
00:04:50.742 10:32:29 make -- common/autotest_common.sh@1129 -- $ make -j144
00:04:51.003 make[1]: Nothing to be done for 'all'.
00:04:52.393 The Meson build system
00:04:52.393 Version: 1.5.0
00:04:52.393 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:04:52.393 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:52.393 Build type: native build
00:04:52.393 Project name: libvfio-user
00:04:52.393 Project version: 0.0.1
00:04:52.393 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:52.393 C linker for the host machine: cc ld.bfd 2.40-14
00:04:52.393 Host machine cpu family: x86_64
00:04:52.393 Host machine cpu: x86_64
00:04:52.393 Run-time dependency threads found: YES
00:04:52.393 Library dl found: YES
00:04:52.393 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:52.393 Run-time dependency json-c found: YES 0.17
00:04:52.393 Run-time dependency cmocka found: YES 1.1.7
00:04:52.393 Program pytest-3 found: NO
00:04:52.393 Program flake8 found: NO
00:04:52.393 Program misspell-fixer found: NO
00:04:52.393 Program restructuredtext-lint found: NO
00:04:52.393 Program valgrind found: YES (/usr/bin/valgrind)
00:04:52.393 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:52.393 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:52.393 Compiler for C supports arguments -Wwrite-strings: YES
00:04:52.393 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:52.393 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:04:52.393 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:04:52.394 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:52.394 Build targets in project: 8
00:04:52.394 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:52.394 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:52.394
00:04:52.394 libvfio-user 0.0.1
00:04:52.394
00:04:52.394 User defined options
00:04:52.394 buildtype : debug
00:04:52.394 default_library: shared
00:04:52.394 libdir : /usr/local/lib
00:04:52.394
00:04:52.394 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:52.964 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:52.964 [1/37] Compiling C object samples/null.p/null.c.o
00:04:52.964 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:04:52.964 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:52.964 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:52.964 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:04:52.964 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:52.964 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:04:52.964 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:04:52.964 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:04:52.964 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:52.964 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:52.964 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:52.964 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:04:52.964 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:52.964 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:52.964 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:52.964 [17/37] Compiling C object samples/server.p/server.c.o
00:04:52.964 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:04:52.964 [19/37] Compiling C object test/unit_tests.p/mocks.c.o
00:04:52.964 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:04:52.964 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:52.964 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:52.964 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:52.964 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:52.964 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:53.224 [26/37] Compiling C object samples/client.p/client.c.o
00:04:53.224 [27/37] Linking target samples/client
00:04:53.224 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:53.224 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:04:53.224 [30/37] Linking target test/unit_tests
00:04:53.224 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:04:53.487 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:53.487 [33/37] Linking target samples/null
00:04:53.487 [34/37] Linking target samples/server
00:04:53.487 [35/37] Linking target samples/lspci
00:04:53.487 [36/37] Linking target samples/shadow_ioeventfd_server
00:04:53.487 [37/37] Linking target samples/gpio-pci-idio-16
00:04:53.487 INFO: autodetecting backend as ninja
00:04:53.487 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:53.487 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:53.748 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:53.748 ninja: no work to do.
00:05:00.339 The Meson build system
00:05:00.339 Version: 1.5.0
00:05:00.339 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:05:00.339 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:05:00.339 Build type: native build
00:05:00.339 Program cat found: YES (/usr/bin/cat)
00:05:00.339 Project name: DPDK
00:05:00.339 Project version: 24.03.0
00:05:00.339 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:00.339 C linker for the host machine: cc ld.bfd 2.40-14
00:05:00.339 Host machine cpu family: x86_64
00:05:00.339 Host machine cpu: x86_64
00:05:00.339 Message: ## Building in Developer Mode ##
00:05:00.339 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:00.339 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:05:00.339 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:00.339 Program python3 found: YES (/usr/bin/python3)
00:05:00.339 Program cat found: YES (/usr/bin/cat)
00:05:00.339 Compiler for C supports arguments -march=native: YES
00:05:00.339 Checking for size of "void *" : 8
00:05:00.339 Checking for size of "void *" : 8 (cached)
00:05:00.339 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:00.339 Library m found: YES
00:05:00.339 Library numa found: YES
00:05:00.339 Has header "numaif.h" : YES
00:05:00.339 Library fdt found: NO
00:05:00.339 Library execinfo found: NO
00:05:00.339 Has header "execinfo.h" : YES
00:05:00.339 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:00.339 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:00.339 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:00.339 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:00.339 Run-time dependency openssl found: YES 3.1.1
00:05:00.339 Run-time dependency libpcap found: YES 1.10.4
00:05:00.339 Has header "pcap.h" with dependency libpcap: YES
00:05:00.339 Compiler for C supports arguments -Wcast-qual: YES
00:05:00.339 Compiler for C supports arguments -Wdeprecated: YES
00:05:00.339 Compiler for C supports arguments -Wformat: YES
00:05:00.339 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:00.339 Compiler for C supports arguments -Wformat-security: NO
00:05:00.339 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:00.339 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:00.339 Compiler for C supports arguments -Wnested-externs: YES
00:05:00.339 Compiler for C supports arguments -Wold-style-definition: YES
00:05:00.339 Compiler for C supports arguments -Wpointer-arith: YES
00:05:00.339 Compiler for C supports arguments -Wsign-compare: YES
00:05:00.339 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:00.339 Compiler for C supports arguments -Wundef: YES
00:05:00.339 Compiler for C supports arguments -Wwrite-strings: YES
00:05:00.339 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:00.339 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:00.339 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:00.339 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:00.339 Program objdump found: YES (/usr/bin/objdump)
00:05:00.339 Compiler for C supports arguments -mavx512f: YES
00:05:00.339 Checking if "AVX512 checking" compiles: YES
00:05:00.339 Fetching value of define "__SSE4_2__" : 1
00:05:00.339 Fetching value of define "__AES__" : 1
00:05:00.339 Fetching value of define "__AVX__" : 1
00:05:00.339 Fetching value of define "__AVX2__" : 1
00:05:00.339 Fetching value of define "__AVX512BW__" : 1
00:05:00.339 Fetching value of define "__AVX512CD__" : 1
00:05:00.339 Fetching value of define "__AVX512DQ__" : 1
00:05:00.339 Fetching value of define "__AVX512F__" : 1
00:05:00.339 Fetching value of define "__AVX512VL__" : 1
00:05:00.339 Fetching value of define "__PCLMUL__" : 1
00:05:00.339 Fetching value of define "__RDRND__" : 1
00:05:00.339 Fetching value of define "__RDSEED__" : 1
00:05:00.340 Fetching value of define "__VPCLMULQDQ__" : 1
00:05:00.340 Fetching value of define "__znver1__" : (undefined)
00:05:00.340 Fetching value of define "__znver2__" : (undefined)
00:05:00.340 Fetching value of define "__znver3__" : (undefined)
00:05:00.340 Fetching value of define "__znver4__" : (undefined)
00:05:00.340 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:00.340 Message: lib/log: Defining dependency "log"
00:05:00.340 Message: lib/kvargs: Defining dependency "kvargs"
00:05:00.340 Message: lib/telemetry: Defining dependency "telemetry"
00:05:00.340 Checking for function "getentropy" : NO
00:05:00.340 Message: lib/eal: Defining dependency "eal"
00:05:00.340 Message: lib/ring: Defining dependency "ring"
00:05:00.340 Message: lib/rcu: Defining dependency "rcu"
00:05:00.340 Message: lib/mempool: Defining dependency "mempool"
00:05:00.340 Message: lib/mbuf: Defining dependency "mbuf"
00:05:00.340 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:00.340 Fetching value of define "__AVX512F__" : 1 (cached)
00:05:00.340 Fetching value of define "__AVX512BW__" : 1 (cached)
00:05:00.340 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:05:00.340 Fetching value of define "__AVX512VL__" : 1 (cached)
00:05:00.340 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:05:00.340 Compiler for C supports arguments -mpclmul: YES
00:05:00.340 Compiler for C supports arguments -maes: YES
00:05:00.340 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:00.340 Compiler for C supports arguments -mavx512bw: YES
00:05:00.340 Compiler for C supports arguments -mavx512dq: YES
00:05:00.340 Compiler for C supports arguments -mavx512vl: YES
00:05:00.340 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:00.340 Compiler for C supports arguments -mavx2: YES
00:05:00.340 Compiler for C supports arguments -mavx: YES
00:05:00.340 Message: lib/net: Defining dependency "net"
00:05:00.340 Message: lib/meter: Defining dependency "meter"
00:05:00.340 Message: lib/ethdev: Defining dependency "ethdev"
00:05:00.340 Message: lib/pci: Defining dependency "pci"
00:05:00.340 Message: lib/cmdline: Defining dependency "cmdline"
00:05:00.340 Message: lib/hash: Defining dependency "hash"
00:05:00.340 Message: lib/timer: Defining dependency "timer"
00:05:00.340 Message: lib/compressdev: Defining dependency "compressdev"
00:05:00.340 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:00.340 Message: lib/dmadev: Defining dependency "dmadev"
00:05:00.340 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:00.340 Message: lib/power: Defining dependency "power"
00:05:00.340 Message: lib/reorder: Defining dependency "reorder"
00:05:00.340 Message: lib/security: Defining dependency "security"
00:05:00.340 Has header "linux/userfaultfd.h" : YES
00:05:00.340 Has header "linux/vduse.h" : YES
00:05:00.340 Message: lib/vhost: Defining dependency "vhost"
00:05:00.340 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:00.340 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:00.340 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:00.340 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:00.340 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:00.340 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:00.340 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:00.340 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:00.340 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:00.340 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:00.340 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:00.340 Configuring doxy-api-html.conf using configuration
00:05:00.340 Configuring doxy-api-man.conf using configuration
00:05:00.340 Program mandb found: YES (/usr/bin/mandb)
00:05:00.340 Program sphinx-build found: NO
00:05:00.340 Configuring rte_build_config.h using configuration
00:05:00.340 Message:
00:05:00.340 =================
00:05:00.340 Applications Enabled
00:05:00.340 =================
00:05:00.340
00:05:00.340 apps:
00:05:00.340
00:05:00.340
00:05:00.340 Message:
00:05:00.340 =================
00:05:00.340 Libraries Enabled
00:05:00.340 =================
00:05:00.340
00:05:00.340 libs:
00:05:00.340 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:00.340 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:00.340 cryptodev, dmadev, power, reorder, security, vhost,
00:05:00.340
00:05:00.340 Message:
00:05:00.340 ===============
00:05:00.340 Drivers Enabled
00:05:00.340 ===============
00:05:00.340
00:05:00.340 common:
00:05:00.340
00:05:00.340 bus:
00:05:00.340 pci, vdev,
00:05:00.340 mempool:
00:05:00.340 ring,
00:05:00.340 dma:
00:05:00.340
00:05:00.340 net:
00:05:00.340
00:05:00.340 crypto:
00:05:00.340
00:05:00.340 compress:
00:05:00.340
00:05:00.340 vdpa:
00:05:00.340
00:05:00.340
00:05:00.340 Message:
00:05:00.340 =================
00:05:00.340 Content Skipped
00:05:00.340 =================
00:05:00.340
00:05:00.340 apps:
00:05:00.340 dumpcap: explicitly disabled via build config
00:05:00.340 graph: explicitly disabled via build config
00:05:00.340 pdump: explicitly disabled via build config
00:05:00.340 proc-info: explicitly disabled via build config
00:05:00.340 test-acl: explicitly disabled via build config
00:05:00.340 test-bbdev: explicitly disabled via build config
00:05:00.340 test-cmdline: explicitly disabled via build config
00:05:00.340 test-compress-perf: explicitly disabled via build config
00:05:00.340 test-crypto-perf: explicitly disabled via build config
00:05:00.340 test-dma-perf: explicitly disabled via build config
00:05:00.340 test-eventdev: explicitly disabled via build config
00:05:00.340 test-fib: explicitly disabled via build config
00:05:00.340 test-flow-perf: explicitly disabled via build config
00:05:00.340 test-gpudev: explicitly disabled via build config
00:05:00.340 test-mldev: explicitly disabled via build config
00:05:00.340 test-pipeline: explicitly disabled via build config
00:05:00.340 test-pmd: explicitly disabled via build config
00:05:00.340 test-regex: explicitly disabled via build config
00:05:00.340 test-sad: explicitly disabled via build config
00:05:00.340 test-security-perf: explicitly disabled via build config
00:05:00.340
00:05:00.340 libs:
00:05:00.340 argparse: explicitly disabled via build config
00:05:00.340 metrics: explicitly disabled via build config
00:05:00.340 acl: explicitly disabled via build config
00:05:00.340 bbdev: explicitly disabled via build config
00:05:00.340 bitratestats: explicitly disabled via build config
00:05:00.340 bpf: explicitly disabled via build config
00:05:00.340 cfgfile: explicitly disabled via build config
00:05:00.340 distributor: explicitly disabled via build config
00:05:00.340 efd: explicitly disabled via build config
00:05:00.340 eventdev: explicitly disabled via build config
00:05:00.340 dispatcher: explicitly disabled via build config
00:05:00.340 gpudev: explicitly disabled via build config
00:05:00.340 gro: explicitly disabled via build config
00:05:00.340 gso: explicitly disabled via build config
00:05:00.340 ip_frag: explicitly disabled via build config
00:05:00.340 jobstats: explicitly disabled via build config
00:05:00.340 latencystats: explicitly disabled via build config
00:05:00.340 lpm: explicitly disabled via build config
00:05:00.340 member: explicitly disabled via build config
00:05:00.340 pcapng: explicitly disabled via build config
00:05:00.340 rawdev: explicitly disabled via build config
00:05:00.340 regexdev: explicitly disabled via build config
00:05:00.340 mldev: explicitly disabled via build config
00:05:00.340 rib: explicitly disabled via build config
00:05:00.340 sched: explicitly disabled via build config
00:05:00.341 stack: explicitly disabled via build config
00:05:00.341 ipsec: explicitly disabled via build config
00:05:00.341 pdcp: explicitly disabled via build config
00:05:00.341 fib: explicitly disabled via build config
00:05:00.341 port: explicitly disabled via build config
00:05:00.341 pdump: explicitly disabled via build config
00:05:00.341 table: explicitly disabled via build config
00:05:00.341 pipeline: explicitly disabled via build config
00:05:00.341 graph: explicitly disabled via build config
00:05:00.341 node: explicitly disabled via build config
00:05:00.341
00:05:00.341 drivers:
00:05:00.341 common/cpt: not in enabled drivers build config
00:05:00.341 common/dpaax: not in enabled drivers build config
00:05:00.341 common/iavf: not in enabled drivers build config
00:05:00.341 common/idpf: not in enabled drivers build config
00:05:00.341 common/ionic: not in enabled drivers build config
00:05:00.341 common/mvep: not in enabled drivers build config
00:05:00.341 common/octeontx: not in enabled drivers build config
00:05:00.341 bus/auxiliary: not in enabled drivers build config
00:05:00.341 bus/cdx: not in enabled drivers build config
00:05:00.341 bus/dpaa: not in enabled drivers build config
00:05:00.341 bus/fslmc: not in enabled drivers build config
00:05:00.341 bus/ifpga: not in enabled drivers build config
00:05:00.341 bus/platform: not in enabled drivers build config
00:05:00.341 bus/uacce: not in enabled drivers build config
00:05:00.341 bus/vmbus: not in enabled drivers build config
00:05:00.341 common/cnxk: not in enabled drivers build config
00:05:00.341 common/mlx5: not in enabled drivers build config
00:05:00.341 common/nfp: not in enabled drivers build config
00:05:00.341 common/nitrox: not in enabled drivers build config
00:05:00.341 common/qat: not in enabled drivers build config
00:05:00.341 common/sfc_efx: not in enabled drivers build config
00:05:00.341 mempool/bucket: not in enabled drivers build config
00:05:00.341 mempool/cnxk: not in enabled drivers build config
00:05:00.341 mempool/dpaa: not in enabled drivers build config
00:05:00.341 mempool/dpaa2: not in enabled drivers build config
00:05:00.341 mempool/octeontx: not in enabled drivers build config
00:05:00.341 mempool/stack: not in enabled drivers build config
00:05:00.341 dma/cnxk: not in enabled drivers build config
00:05:00.341 dma/dpaa: not in enabled drivers build config
00:05:00.341 dma/dpaa2: not in enabled drivers build config
00:05:00.341 dma/hisilicon: not in enabled drivers build config
00:05:00.341 dma/idxd: not in enabled drivers build config
00:05:00.341 dma/ioat: not in enabled drivers build config
00:05:00.341 dma/skeleton: not in enabled drivers build config
00:05:00.341 net/af_packet: not in enabled drivers build config
00:05:00.341 net/af_xdp: not in enabled drivers build config
00:05:00.341 net/ark: not in enabled drivers build config
00:05:00.341 net/atlantic: not in enabled drivers build config
00:05:00.341 net/avp: not in enabled drivers build config
00:05:00.341 net/axgbe: not in enabled drivers build config
00:05:00.341 net/bnx2x: not in enabled drivers build config
00:05:00.341 net/bnxt: not in enabled drivers build config
00:05:00.341 net/bonding: not in enabled drivers build config
00:05:00.341 net/cnxk: not in enabled drivers build config
00:05:00.341 net/cpfl: not in enabled drivers build config
00:05:00.341 net/cxgbe: not in enabled drivers build config
00:05:00.341 net/dpaa: not in enabled drivers build config
00:05:00.341 net/dpaa2: not in enabled drivers build config
00:05:00.341 net/e1000: not in enabled drivers build config
00:05:00.341 net/ena: not in enabled drivers build config
00:05:00.341 net/enetc: not in enabled drivers build config
00:05:00.341 net/enetfec: not in enabled drivers build config
00:05:00.341 net/enic: not in enabled drivers build config
00:05:00.341 net/failsafe: not in enabled drivers build config
00:05:00.341 net/fm10k: not in enabled drivers build config
00:05:00.341 net/gve: not in enabled drivers build config
00:05:00.341 net/hinic: not in enabled drivers build config
00:05:00.341 net/hns3: not in enabled drivers build config
00:05:00.341 net/i40e: not in enabled drivers build config
00:05:00.341 net/iavf: not in enabled drivers build config
00:05:00.341 net/ice: not in enabled drivers build config
00:05:00.341 net/idpf: not in enabled drivers build config
00:05:00.341 net/igc: not in enabled drivers build config
00:05:00.341 net/ionic: not in enabled drivers build config
00:05:00.341 net/ipn3ke: not in enabled drivers build config
00:05:00.341 net/ixgbe: not in enabled drivers build config
00:05:00.341 net/mana: not in enabled drivers build config
00:05:00.341 net/memif: not in enabled drivers build config
00:05:00.341 net/mlx4: not in enabled drivers build config
00:05:00.341 net/mlx5: not in enabled drivers build config
00:05:00.341 net/mvneta: not in enabled drivers build config
00:05:00.341 net/mvpp2: not in enabled drivers build config
00:05:00.341 net/netvsc: not in enabled drivers build config
00:05:00.341 net/nfb: not in enabled drivers build config
00:05:00.341 net/nfp: not in enabled drivers build config
00:05:00.341 net/ngbe: not in enabled drivers build config
00:05:00.341 net/null: not in enabled drivers build config
00:05:00.341 net/octeontx: not in enabled drivers build config
00:05:00.341 net/octeon_ep: not in enabled drivers build config
00:05:00.341 net/pcap: not in enabled drivers build config
00:05:00.341 net/pfe: not in enabled drivers build config
00:05:00.341 net/qede: not in enabled drivers build config
00:05:00.341 net/ring: not in enabled drivers build config
00:05:00.341 net/sfc: not in enabled drivers build config
00:05:00.341 net/softnic: not in enabled drivers build config
00:05:00.341 net/tap: not in enabled drivers build config
00:05:00.341 net/thunderx: not in enabled drivers build config
00:05:00.341 net/txgbe: not in enabled drivers build config
00:05:00.341 net/vdev_netvsc: not in enabled drivers build config
00:05:00.341 net/vhost: not in enabled drivers build config
00:05:00.341 net/virtio: not in enabled drivers build config
00:05:00.341 net/vmxnet3: not in enabled drivers build config
00:05:00.341 raw/*: missing internal dependency, "rawdev"
00:05:00.341 crypto/armv8: not in enabled drivers build config
00:05:00.341 crypto/bcmfs: not in enabled drivers build config
00:05:00.341 crypto/caam_jr: not in enabled drivers build config
00:05:00.341 crypto/ccp: not in enabled drivers build config
00:05:00.341 crypto/cnxk: not in enabled drivers build config
00:05:00.341 crypto/dpaa_sec: not in enabled drivers build config
00:05:00.341 crypto/dpaa2_sec: not in enabled drivers build config
00:05:00.341 crypto/ipsec_mb: not in enabled drivers build config
00:05:00.341 crypto/mlx5: not in enabled drivers build config
00:05:00.341 crypto/mvsam: not in enabled drivers build config
00:05:00.341 crypto/nitrox: not in enabled drivers build config
00:05:00.341 crypto/null: not in enabled drivers build config
00:05:00.341 crypto/octeontx: not in enabled drivers build config
00:05:00.341 crypto/openssl: not in enabled drivers build config
00:05:00.341 crypto/scheduler: not in enabled drivers build config
00:05:00.341 crypto/uadk: not in enabled drivers build config
00:05:00.341 crypto/virtio: not in enabled drivers build config
00:05:00.341 compress/isal: not in enabled drivers build config
00:05:00.341 compress/mlx5: not in enabled drivers build config
00:05:00.341 compress/nitrox: not in enabled drivers build config
00:05:00.341 compress/octeontx: not in enabled drivers build config
00:05:00.341 compress/zlib: not in enabled drivers build config
00:05:00.341 regex/*: missing internal dependency, "regexdev"
00:05:00.341 ml/*: missing internal dependency, "mldev"
00:05:00.341 vdpa/ifc: not in enabled drivers build config
00:05:00.341 vdpa/mlx5: not in enabled drivers build config
00:05:00.341 vdpa/nfp: not in enabled drivers build config
00:05:00.341 vdpa/sfc: not in enabled drivers build config
00:05:00.341 event/*: missing internal dependency, "eventdev"
00:05:00.341 baseband/*: missing internal dependency, "bbdev"
00:05:00.341 gpu/*: missing internal dependency, "gpudev"
00:05:00.341
00:05:00.341
00:05:00.341 Build targets in project: 84
00:05:00.341
00:05:00.341 DPDK 24.03.0
00:05:00.341
00:05:00.341 User defined options
00:05:00.341 buildtype : debug
00:05:00.341 default_library : shared
00:05:00.341 libdir : lib
00:05:00.341 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:00.341 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:00.341 c_link_args :
00:05:00.341 cpu_instruction_set: native
00:05:00.341 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:05:00.341 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:05:00.341 enable_docs : false
00:05:00.341 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:05:00.341 enable_kmods : false
00:05:00.341 max_lcores : 128
00:05:00.341 tests : false
00:05:00.341
00:05:00.341 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:00.342 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:05:00.342 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:00.342 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:00.342 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:00.342 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:00.342 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:00.342 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:00.342 [7/267] Linking static target lib/librte_kvargs.a
00:05:00.342 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:00.342 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:00.342 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:00.342 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:00.342 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:00.342 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:00.342 [14/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:00.342 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:00.342 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:00.342 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:00.342 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:00.342 [19/267] Linking static target lib/librte_log.a
00:05:00.342 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:00.342 [21/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:00.342 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:00.342 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:00.342 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:00.342 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:05:00.342 [26/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:00.342 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:00.342 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:00.342 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:00.342 [30/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:00.342 [31/267] Linking static target lib/librte_pci.a
00:05:00.342 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:00.342 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:00.342 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:00.342 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:00.601 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:00.601 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:00.601 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:00.601 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:00.601 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:00.601 [41/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:00.601 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:00.601 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:00.601 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:00.601 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:00.601 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:00.601 [47/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:00.601 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:00.601 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:00.601 [50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:00.601 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:00.601 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:00.601 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:00.601 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:00.601 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:00.601 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:00.601 [57/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:00.601 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:05:00.860 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:00.860 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:00.860 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:00.860 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:00.860 [63/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:00.860 [64/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:05:00.860 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:00.860 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:00.860 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:00.860 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:00.860 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:00.860 [70/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:00.860 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:00.860 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:00.860 [73/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:05:00.860 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:00.860 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:00.860 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:00.860 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:00.860 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:00.860 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:00.860 [80/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:00.860 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:00.860 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:00.860 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:00.860 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:00.860 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:00.860 [86/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:00.860 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:00.860 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:00.860 [89/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:00.860 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:00.860 [91/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:00.860 [92/267] Linking static target lib/librte_telemetry.a 00:05:00.860 [93/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:00.860 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:00.860 [95/267] Linking static target lib/librte_meter.a 00:05:00.860 [96/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:00.860 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:00.860 [98/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:00.860 [99/267] Linking static target lib/librte_rcu.a 00:05:00.860 [100/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:00.860 [101/267] Linking static target lib/librte_ring.a 00:05:00.860 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:00.860 [103/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:00.860 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:00.860 [105/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:00.860 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:00.860 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:00.860 [108/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:00.860 [109/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:00.860 [110/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:00.860 [111/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:00.860 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:00.860 [113/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:00.860 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:00.860 [115/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:00.860 [116/267] Linking static target lib/librte_cmdline.a 00:05:00.860 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:00.860 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:00.860 [119/267] Linking static target lib/librte_timer.a 00:05:00.860 [120/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:00.860 [121/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:00.860 [122/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:00.860 [123/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:00.860 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:00.860 [125/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:00.860 [126/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:00.860 [127/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:00.860 [128/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:00.860 [129/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:00.860 [130/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:00.860 [131/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:00.860 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:00.860 [133/267] Linking static target lib/librte_compressdev.a 00:05:00.860 [134/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:00.860 [135/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:00.860 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:00.860 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:00.860 [138/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:00.860 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:00.860 [140/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:00.860 [141/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:00.860 [142/267] Linking static target lib/librte_net.a 00:05:00.860 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:00.860 [144/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:00.860 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:00.860 [146/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.860 [147/267] Linking static target lib/librte_dmadev.a 00:05:00.860 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:00.860 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:00.860 [150/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:00.861 [151/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:00.861 [152/267] 
Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:00.861 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:00.861 [154/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:00.861 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:00.861 [156/267] Linking target lib/librte_log.so.24.1 00:05:00.861 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:00.861 [158/267] Linking static target lib/librte_mempool.a 00:05:00.861 [159/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:00.861 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:00.861 [161/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:00.861 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:00.861 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:00.861 [164/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:00.861 [165/267] Linking static target lib/librte_power.a 00:05:00.861 [166/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:00.861 [167/267] Linking static target lib/librte_eal.a 00:05:00.861 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:00.861 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:00.861 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:00.861 [171/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:01.122 [172/267] Linking static target lib/librte_security.a 00:05:01.122 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:01.122 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:01.122 [175/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:01.122 [176/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:01.122 [177/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:01.122 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:01.122 [179/267] Linking static target lib/librte_reorder.a 00:05:01.122 [180/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:01.122 [181/267] Linking static target lib/librte_mbuf.a 00:05:01.122 [182/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:01.122 [183/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:01.122 [184/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:01.122 [185/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:01.122 [186/267] Linking static target drivers/librte_bus_vdev.a 00:05:01.122 [187/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.122 [188/267] Linking target lib/librte_kvargs.so.24.1 00:05:01.122 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:01.122 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:01.122 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.122 [192/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:01.122 [193/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 
00:05:01.122 [194/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:01.123 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:01.123 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:01.123 [197/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:01.123 [198/267] Linking static target lib/librte_hash.a 00:05:01.123 [199/267] Linking static target drivers/librte_bus_pci.a 00:05:01.123 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.123 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:01.384 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:01.384 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:01.384 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:01.384 [205/267] Linking static target drivers/librte_mempool_ring.a 00:05:01.384 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:01.384 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:01.384 [208/267] Linking static target lib/librte_cryptodev.a 00:05:01.384 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.384 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.384 [211/267] Linking target lib/librte_telemetry.so.24.1 00:05:01.645 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.645 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.645 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:01.645 [215/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.645 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.645 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.645 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:01.906 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:01.906 [220/267] Linking static target lib/librte_ethdev.a 00:05:01.906 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.906 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.906 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.167 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.167 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.167 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.428 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:02.428 [228/267] Linking static target lib/librte_vhost.a 00:05:03.814 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:05:04.756 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.346 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.731 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.731 [233/267] Linking target lib/librte_eal.so.24.1 00:05:12.731 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:12.731 [235/267] Linking target lib/librte_meter.so.24.1 00:05:12.731 [236/267] Linking target lib/librte_ring.so.24.1 00:05:12.731 [237/267] Linking target lib/librte_pci.so.24.1 00:05:12.731 [238/267] Linking target lib/librte_dmadev.so.24.1 00:05:12.731 [239/267] Linking target lib/librte_timer.so.24.1 00:05:12.731 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:05:12.991 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:12.992 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:12.992 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:12.992 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:12.992 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:12.992 [246/267] Linking target lib/librte_rcu.so.24.1 00:05:12.992 [247/267] Linking target lib/librte_mempool.so.24.1 00:05:12.992 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:05:13.252 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:13.252 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:13.252 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:05:13.252 [252/267] Linking target lib/librte_mbuf.so.24.1 00:05:13.252 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:13.252 [254/267] Linking target lib/librte_reorder.so.24.1 00:05:13.252 [255/267] Linking target lib/librte_net.so.24.1 00:05:13.252 [256/267] Linking target lib/librte_compressdev.so.24.1 00:05:13.512 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:05:13.512 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:13.512 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:13.512 [260/267] Linking target lib/librte_cmdline.so.24.1 00:05:13.512 [261/267] Linking target lib/librte_hash.so.24.1 00:05:13.512 [262/267] Linking target lib/librte_ethdev.so.24.1 00:05:13.512 [263/267] Linking target lib/librte_security.so.24.1 00:05:13.773 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:13.773 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:13.773 [266/267] Linking target lib/librte_power.so.24.1 00:05:13.773 [267/267] Linking target lib/librte_vhost.so.24.1 00:05:13.773 INFO: autodetecting backend as ninja 00:05:13.773 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:05:17.978 CC lib/ut_mock/mock.o 00:05:17.978 CC lib/log/log.o 00:05:17.978 CC lib/log/log_flags.o 00:05:17.978 CC lib/log/log_deprecated.o 00:05:17.978 CC lib/ut/ut.o 00:05:17.978 LIB libspdk_ut_mock.a 00:05:17.978 LIB libspdk_log.a 00:05:17.978 LIB libspdk_ut.a 00:05:17.978 SO 
libspdk_ut_mock.so.6.0 00:05:17.978 SO libspdk_log.so.7.1 00:05:17.978 SO libspdk_ut.so.2.0 00:05:18.239 SYMLINK libspdk_ut_mock.so 00:05:18.239 SYMLINK libspdk_log.so 00:05:18.239 SYMLINK libspdk_ut.so 00:05:18.499 CC lib/dma/dma.o 00:05:18.499 CC lib/util/base64.o 00:05:18.499 CC lib/util/bit_array.o 00:05:18.499 CC lib/util/cpuset.o 00:05:18.499 CC lib/util/crc16.o 00:05:18.499 CC lib/util/crc32.o 00:05:18.499 CC lib/util/crc32c.o 00:05:18.499 CC lib/util/crc32_ieee.o 00:05:18.500 CC lib/ioat/ioat.o 00:05:18.500 CC lib/util/crc64.o 00:05:18.500 CXX lib/trace_parser/trace.o 00:05:18.500 CC lib/util/dif.o 00:05:18.500 CC lib/util/fd.o 00:05:18.500 CC lib/util/fd_group.o 00:05:18.500 CC lib/util/file.o 00:05:18.500 CC lib/util/hexlify.o 00:05:18.500 CC lib/util/iov.o 00:05:18.500 CC lib/util/math.o 00:05:18.500 CC lib/util/net.o 00:05:18.500 CC lib/util/pipe.o 00:05:18.500 CC lib/util/strerror_tls.o 00:05:18.500 CC lib/util/string.o 00:05:18.500 CC lib/util/uuid.o 00:05:18.500 CC lib/util/xor.o 00:05:18.500 CC lib/util/zipf.o 00:05:18.500 CC lib/util/md5.o 00:05:18.759 CC lib/vfio_user/host/vfio_user_pci.o 00:05:18.759 CC lib/vfio_user/host/vfio_user.o 00:05:18.759 LIB libspdk_dma.a 00:05:18.759 SO libspdk_dma.so.5.0 00:05:18.759 LIB libspdk_ioat.a 00:05:18.759 SYMLINK libspdk_dma.so 00:05:18.759 SO libspdk_ioat.so.7.0 00:05:19.020 SYMLINK libspdk_ioat.so 00:05:19.020 LIB libspdk_vfio_user.a 00:05:19.020 SO libspdk_vfio_user.so.5.0 00:05:19.020 LIB libspdk_util.a 00:05:19.020 SYMLINK libspdk_vfio_user.so 00:05:19.020 SO libspdk_util.so.10.1 00:05:19.281 SYMLINK libspdk_util.so 00:05:19.281 LIB libspdk_trace_parser.a 00:05:19.543 SO libspdk_trace_parser.so.6.0 00:05:19.543 SYMLINK libspdk_trace_parser.so 00:05:19.543 CC lib/rdma_utils/rdma_utils.o 00:05:19.543 CC lib/json/json_parse.o 00:05:19.543 CC lib/json/json_util.o 00:05:19.543 CC lib/json/json_write.o 00:05:19.543 CC lib/env_dpdk/env.o 00:05:19.543 CC lib/idxd/idxd.o 00:05:19.543 CC lib/conf/conf.o 00:05:19.543 CC lib/vmd/vmd.o 00:05:19.543 CC lib/env_dpdk/memory.o 00:05:19.543 CC lib/idxd/idxd_user.o 00:05:19.543 CC lib/vmd/led.o 00:05:19.543 CC lib/env_dpdk/pci.o 00:05:19.543 CC lib/idxd/idxd_kernel.o 00:05:19.543 CC lib/env_dpdk/init.o 00:05:19.543 CC lib/env_dpdk/threads.o 00:05:19.543 CC lib/env_dpdk/pci_ioat.o 00:05:19.543 CC lib/env_dpdk/pci_virtio.o 00:05:19.543 CC lib/env_dpdk/pci_vmd.o 00:05:19.543 CC lib/env_dpdk/pci_idxd.o 00:05:19.543 CC lib/env_dpdk/pci_event.o 00:05:19.543 CC lib/env_dpdk/sigbus_handler.o 00:05:19.543 CC lib/env_dpdk/pci_dpdk.o 00:05:19.543 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:19.543 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:19.805 LIB libspdk_conf.a 00:05:19.805 LIB libspdk_rdma_utils.a 00:05:19.805 SO libspdk_conf.so.6.0 00:05:20.066 LIB libspdk_json.a 00:05:20.066 SO libspdk_rdma_utils.so.1.0 00:05:20.066 SO libspdk_json.so.6.0 00:05:20.066 SYMLINK libspdk_conf.so 00:05:20.066 SYMLINK libspdk_rdma_utils.so 00:05:20.066 SYMLINK libspdk_json.so 00:05:20.066 LIB libspdk_idxd.a 00:05:20.327 SO libspdk_idxd.so.12.1 00:05:20.327 LIB libspdk_vmd.a 00:05:20.327 SO libspdk_vmd.so.6.0 00:05:20.327 SYMLINK libspdk_idxd.so 00:05:20.327 SYMLINK libspdk_vmd.so 00:05:20.327 CC lib/rdma_provider/common.o 00:05:20.327 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:20.327 CC lib/jsonrpc/jsonrpc_server.o 00:05:20.327 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:20.327 CC lib/jsonrpc/jsonrpc_client.o 00:05:20.327 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:20.588 LIB libspdk_rdma_provider.a 00:05:20.588 LIB libspdk_jsonrpc.a 
00:05:20.588 SO libspdk_rdma_provider.so.7.0 00:05:20.588 SO libspdk_jsonrpc.so.6.0 00:05:20.849 SYMLINK libspdk_rdma_provider.so 00:05:20.849 SYMLINK libspdk_jsonrpc.so 00:05:20.849 LIB libspdk_env_dpdk.a 00:05:20.849 SO libspdk_env_dpdk.so.15.1 00:05:21.110 SYMLINK libspdk_env_dpdk.so 00:05:21.110 CC lib/rpc/rpc.o 00:05:21.372 LIB libspdk_rpc.a 00:05:21.372 SO libspdk_rpc.so.6.0 00:05:21.372 SYMLINK libspdk_rpc.so 00:05:21.945 CC lib/trace/trace.o 00:05:21.945 CC lib/trace/trace_flags.o 00:05:21.945 CC lib/trace/trace_rpc.o 00:05:21.945 CC lib/notify/notify.o 00:05:21.945 CC lib/keyring/keyring.o 00:05:21.945 CC lib/notify/notify_rpc.o 00:05:21.945 CC lib/keyring/keyring_rpc.o 00:05:21.945 LIB libspdk_notify.a 00:05:21.945 SO libspdk_notify.so.6.0 00:05:22.206 LIB libspdk_keyring.a 00:05:22.206 LIB libspdk_trace.a 00:05:22.206 SO libspdk_keyring.so.2.0 00:05:22.206 SYMLINK libspdk_notify.so 00:05:22.206 SO libspdk_trace.so.11.0 00:05:22.206 SYMLINK libspdk_keyring.so 00:05:22.206 SYMLINK libspdk_trace.so 00:05:22.467 CC lib/thread/thread.o 00:05:22.467 CC lib/thread/iobuf.o 00:05:22.467 CC lib/sock/sock.o 00:05:22.467 CC lib/sock/sock_rpc.o 00:05:23.037 LIB libspdk_sock.a 00:05:23.037 SO libspdk_sock.so.10.0 00:05:23.037 SYMLINK libspdk_sock.so 00:05:23.297 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:23.297 CC lib/nvme/nvme_ctrlr.o 00:05:23.297 CC lib/nvme/nvme_fabric.o 00:05:23.297 CC lib/nvme/nvme_ns_cmd.o 00:05:23.297 CC lib/nvme/nvme_ns.o 00:05:23.297 CC lib/nvme/nvme_pcie_common.o 00:05:23.297 CC lib/nvme/nvme_pcie.o 00:05:23.297 CC lib/nvme/nvme_qpair.o 00:05:23.297 CC lib/nvme/nvme.o 00:05:23.297 CC lib/nvme/nvme_quirks.o 00:05:23.297 CC lib/nvme/nvme_transport.o 00:05:23.297 CC lib/nvme/nvme_discovery.o 00:05:23.297 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:23.297 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:23.297 CC lib/nvme/nvme_tcp.o 00:05:23.297 CC lib/nvme/nvme_opal.o 00:05:23.297 CC lib/nvme/nvme_io_msg.o 00:05:23.297 CC lib/nvme/nvme_poll_group.o 00:05:23.297 CC lib/nvme/nvme_zns.o 00:05:23.297 CC lib/nvme/nvme_stubs.o 00:05:23.297 CC lib/nvme/nvme_auth.o 00:05:23.297 CC lib/nvme/nvme_cuse.o 00:05:23.297 CC lib/nvme/nvme_vfio_user.o 00:05:23.297 CC lib/nvme/nvme_rdma.o 00:05:23.867 LIB libspdk_thread.a 00:05:23.867 SO libspdk_thread.so.11.0 00:05:24.128 SYMLINK libspdk_thread.so 00:05:24.389 CC lib/accel/accel.o 00:05:24.389 CC lib/init/json_config.o 00:05:24.389 CC lib/accel/accel_rpc.o 00:05:24.389 CC lib/init/subsystem_rpc.o 00:05:24.389 CC lib/init/subsystem.o 00:05:24.389 CC lib/accel/accel_sw.o 00:05:24.389 CC lib/init/rpc.o 00:05:24.389 CC lib/fsdev/fsdev.o 00:05:24.389 CC lib/fsdev/fsdev_io.o 00:05:24.389 CC lib/fsdev/fsdev_rpc.o 00:05:24.389 CC lib/vfu_tgt/tgt_endpoint.o 00:05:24.389 CC lib/blob/blobstore.o 00:05:24.389 CC lib/virtio/virtio.o 00:05:24.389 CC lib/vfu_tgt/tgt_rpc.o 00:05:24.389 CC lib/blob/request.o 00:05:24.389 CC lib/virtio/virtio_vhost_user.o 00:05:24.389 CC lib/blob/zeroes.o 00:05:24.389 CC lib/virtio/virtio_vfio_user.o 00:05:24.389 CC lib/blob/blob_bs_dev.o 00:05:24.389 CC lib/virtio/virtio_pci.o 00:05:24.650 LIB libspdk_init.a 00:05:24.650 SO libspdk_init.so.6.0 00:05:24.650 LIB libspdk_virtio.a 00:05:24.650 LIB libspdk_vfu_tgt.a 00:05:24.650 SYMLINK libspdk_init.so 00:05:24.912 SO libspdk_virtio.so.7.0 00:05:24.912 SO libspdk_vfu_tgt.so.3.0 00:05:24.912 SYMLINK libspdk_virtio.so 00:05:24.912 SYMLINK libspdk_vfu_tgt.so 00:05:24.912 LIB libspdk_fsdev.a 00:05:25.173 SO libspdk_fsdev.so.2.0 00:05:25.173 CC lib/event/app.o 00:05:25.173 CC lib/event/reactor.o 
00:05:25.173 CC lib/event/log_rpc.o 00:05:25.173 CC lib/event/app_rpc.o 00:05:25.173 CC lib/event/scheduler_static.o 00:05:25.173 SYMLINK libspdk_fsdev.so 00:05:25.434 LIB libspdk_accel.a 00:05:25.434 LIB libspdk_nvme.a 00:05:25.434 SO libspdk_accel.so.16.0 00:05:25.434 SYMLINK libspdk_accel.so 00:05:25.434 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:25.434 SO libspdk_nvme.so.15.0 00:05:25.434 LIB libspdk_event.a 00:05:25.696 SO libspdk_event.so.14.0 00:05:25.696 SYMLINK libspdk_event.so 00:05:25.696 SYMLINK libspdk_nvme.so 00:05:25.958 CC lib/bdev/bdev.o 00:05:25.958 CC lib/bdev/bdev_rpc.o 00:05:25.958 CC lib/bdev/bdev_zone.o 00:05:25.958 CC lib/bdev/part.o 00:05:25.958 CC lib/bdev/scsi_nvme.o 00:05:26.219 LIB libspdk_fuse_dispatcher.a 00:05:26.219 SO libspdk_fuse_dispatcher.so.1.0 00:05:26.219 SYMLINK libspdk_fuse_dispatcher.so 00:05:27.162 LIB libspdk_blob.a 00:05:27.162 SO libspdk_blob.so.11.0 00:05:27.162 SYMLINK libspdk_blob.so 00:05:27.733 CC lib/lvol/lvol.o 00:05:27.733 CC lib/blobfs/blobfs.o 00:05:27.733 CC lib/blobfs/tree.o 00:05:28.306 LIB libspdk_bdev.a 00:05:28.306 SO libspdk_bdev.so.17.0 00:05:28.306 SYMLINK libspdk_bdev.so 00:05:28.306 LIB libspdk_blobfs.a 00:05:28.306 SO libspdk_blobfs.so.10.0 00:05:28.567 LIB libspdk_lvol.a 00:05:28.567 SYMLINK libspdk_blobfs.so 00:05:28.567 SO libspdk_lvol.so.10.0 00:05:28.567 SYMLINK libspdk_lvol.so 00:05:28.567 CC lib/nbd/nbd.o 00:05:28.567 CC lib/scsi/dev.o 00:05:28.567 CC lib/ublk/ublk.o 00:05:28.567 CC lib/nvmf/ctrlr.o 00:05:28.567 CC lib/nbd/nbd_rpc.o 00:05:28.567 CC lib/scsi/lun.o 00:05:28.567 CC lib/ftl/ftl_core.o 00:05:28.567 CC lib/ublk/ublk_rpc.o 00:05:28.567 CC lib/scsi/port.o 00:05:28.567 CC lib/nvmf/ctrlr_discovery.o 00:05:28.567 CC lib/scsi/scsi.o 00:05:28.567 CC lib/nvmf/ctrlr_bdev.o 00:05:28.567 CC lib/ftl/ftl_init.o 00:05:28.567 CC lib/scsi/scsi_bdev.o 00:05:28.567 CC lib/nvmf/subsystem.o 00:05:28.567 CC lib/ftl/ftl_layout.o 00:05:28.567 CC lib/nvmf/nvmf.o 00:05:28.567 CC lib/scsi/scsi_pr.o 00:05:28.567 CC lib/ftl/ftl_debug.o 00:05:28.567 CC lib/nvmf/nvmf_rpc.o 00:05:28.567 CC lib/scsi/scsi_rpc.o 00:05:28.567 CC lib/ftl/ftl_io.o 00:05:28.567 CC lib/scsi/task.o 00:05:28.830 CC lib/nvmf/transport.o 00:05:28.830 CC lib/ftl/ftl_sb.o 00:05:28.830 CC lib/nvmf/tcp.o 00:05:28.830 CC lib/ftl/ftl_l2p.o 00:05:28.830 CC lib/nvmf/stubs.o 00:05:28.830 CC lib/ftl/ftl_l2p_flat.o 00:05:28.830 CC lib/nvmf/mdns_server.o 00:05:28.830 CC lib/ftl/ftl_nv_cache.o 00:05:28.830 CC lib/nvmf/vfio_user.o 00:05:28.830 CC lib/ftl/ftl_band.o 00:05:28.830 CC lib/nvmf/rdma.o 00:05:28.830 CC lib/nvmf/auth.o 00:05:28.830 CC lib/ftl/ftl_band_ops.o 00:05:28.830 CC lib/ftl/ftl_writer.o 00:05:28.830 CC lib/ftl/ftl_rq.o 00:05:28.830 CC lib/ftl/ftl_reloc.o 00:05:28.830 CC lib/ftl/ftl_l2p_cache.o 00:05:28.830 CC lib/ftl/ftl_p2l.o 00:05:28.830 CC lib/ftl/ftl_p2l_log.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:28.830 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:28.830 CC lib/ftl/utils/ftl_conf.o 00:05:28.830 CC lib/ftl/utils/ftl_md.o 00:05:28.830 CC 
lib/ftl/utils/ftl_mempool.o 00:05:28.830 CC lib/ftl/utils/ftl_bitmap.o 00:05:28.830 CC lib/ftl/utils/ftl_property.o 00:05:28.830 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:28.830 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:28.830 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:28.830 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:28.830 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:28.830 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:28.830 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:28.830 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:28.830 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:28.830 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:28.830 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:28.830 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:28.830 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:28.830 CC lib/ftl/base/ftl_base_bdev.o 00:05:28.830 CC lib/ftl/base/ftl_base_dev.o 00:05:28.830 CC lib/ftl/ftl_trace.o 00:05:29.400 LIB libspdk_nbd.a 00:05:29.400 SO libspdk_nbd.so.7.0 00:05:29.400 LIB libspdk_scsi.a 00:05:29.400 SYMLINK libspdk_nbd.so 00:05:29.400 SO libspdk_scsi.so.9.0 00:05:29.661 LIB libspdk_ublk.a 00:05:29.661 SYMLINK libspdk_scsi.so 00:05:29.661 SO libspdk_ublk.so.3.0 00:05:29.661 SYMLINK libspdk_ublk.so 00:05:29.922 LIB libspdk_ftl.a 00:05:29.922 CC lib/iscsi/conn.o 00:05:29.922 CC lib/iscsi/init_grp.o 00:05:29.922 CC lib/iscsi/iscsi.o 00:05:29.922 CC lib/iscsi/param.o 00:05:29.922 CC lib/iscsi/portal_grp.o 00:05:29.922 CC lib/iscsi/tgt_node.o 00:05:29.922 CC lib/iscsi/iscsi_subsystem.o 00:05:29.922 CC lib/iscsi/iscsi_rpc.o 00:05:29.922 CC lib/iscsi/task.o 00:05:29.922 CC lib/vhost/vhost.o 00:05:29.922 CC lib/vhost/vhost_rpc.o 00:05:29.922 CC lib/vhost/vhost_scsi.o 00:05:29.922 CC lib/vhost/vhost_blk.o 00:05:29.922 CC lib/vhost/rte_vhost_user.o 00:05:30.183 SO libspdk_ftl.so.9.0 00:05:30.462 SYMLINK libspdk_ftl.so 00:05:30.724 LIB libspdk_nvmf.a 00:05:30.986 SO libspdk_nvmf.so.20.0 00:05:30.986 LIB libspdk_vhost.a 00:05:30.986 SO libspdk_vhost.so.8.0 00:05:30.986 SYMLINK libspdk_nvmf.so 00:05:31.248 SYMLINK libspdk_vhost.so 00:05:31.248 LIB libspdk_iscsi.a 00:05:31.248 SO libspdk_iscsi.so.8.0 00:05:31.508 SYMLINK libspdk_iscsi.so 00:05:32.079 CC module/env_dpdk/env_dpdk_rpc.o 00:05:32.079 CC module/vfu_device/vfu_virtio.o 00:05:32.079 CC module/vfu_device/vfu_virtio_blk.o 00:05:32.079 CC module/vfu_device/vfu_virtio_scsi.o 00:05:32.079 CC module/vfu_device/vfu_virtio_rpc.o 00:05:32.079 CC module/vfu_device/vfu_virtio_fs.o 00:05:32.079 LIB libspdk_env_dpdk_rpc.a 00:05:32.079 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:32.079 CC module/accel/dsa/accel_dsa.o 00:05:32.079 CC module/keyring/linux/keyring.o 00:05:32.079 CC module/accel/error/accel_error.o 00:05:32.079 CC module/accel/dsa/accel_dsa_rpc.o 00:05:32.079 CC module/keyring/linux/keyring_rpc.o 00:05:32.079 CC module/accel/error/accel_error_rpc.o 00:05:32.079 CC module/sock/posix/posix.o 00:05:32.079 CC module/scheduler/gscheduler/gscheduler.o 00:05:32.079 CC module/keyring/file/keyring.o 00:05:32.079 CC module/accel/ioat/accel_ioat.o 00:05:32.079 CC module/keyring/file/keyring_rpc.o 00:05:32.079 CC module/fsdev/aio/fsdev_aio.o 00:05:32.079 CC module/accel/ioat/accel_ioat_rpc.o 00:05:32.079 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:32.079 CC module/fsdev/aio/linux_aio_mgr.o 00:05:32.341 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:32.341 CC module/accel/iaa/accel_iaa.o 00:05:32.341 CC module/blob/bdev/blob_bdev.o 00:05:32.341 CC module/accel/iaa/accel_iaa_rpc.o 00:05:32.341 SO libspdk_env_dpdk_rpc.so.6.0 00:05:32.341 SYMLINK 
libspdk_env_dpdk_rpc.so 00:05:32.341 LIB libspdk_keyring_linux.a 00:05:32.341 LIB libspdk_keyring_file.a 00:05:32.341 LIB libspdk_scheduler_gscheduler.a 00:05:32.341 LIB libspdk_scheduler_dpdk_governor.a 00:05:32.341 SO libspdk_scheduler_gscheduler.so.4.0 00:05:32.341 SO libspdk_keyring_linux.so.1.0 00:05:32.341 SO libspdk_keyring_file.so.2.0 00:05:32.341 LIB libspdk_accel_ioat.a 00:05:32.341 LIB libspdk_scheduler_dynamic.a 00:05:32.341 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:32.341 LIB libspdk_accel_error.a 00:05:32.602 SO libspdk_accel_ioat.so.6.0 00:05:32.602 LIB libspdk_accel_iaa.a 00:05:32.602 SO libspdk_scheduler_dynamic.so.4.0 00:05:32.602 SYMLINK libspdk_scheduler_gscheduler.so 00:05:32.602 SO libspdk_accel_error.so.2.0 00:05:32.602 SYMLINK libspdk_keyring_linux.so 00:05:32.602 SYMLINK libspdk_keyring_file.so 00:05:32.602 SO libspdk_accel_iaa.so.3.0 00:05:32.602 LIB libspdk_accel_dsa.a 00:05:32.602 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:32.602 LIB libspdk_blob_bdev.a 00:05:32.602 SYMLINK libspdk_accel_ioat.so 00:05:32.602 SYMLINK libspdk_scheduler_dynamic.so 00:05:32.602 SO libspdk_accel_dsa.so.5.0 00:05:32.602 SO libspdk_blob_bdev.so.11.0 00:05:32.602 SYMLINK libspdk_accel_error.so 00:05:32.602 SYMLINK libspdk_accel_iaa.so 00:05:32.602 LIB libspdk_vfu_device.a 00:05:32.602 SO libspdk_vfu_device.so.3.0 00:05:32.602 SYMLINK libspdk_accel_dsa.so 00:05:32.602 SYMLINK libspdk_blob_bdev.so 00:05:32.863 SYMLINK libspdk_vfu_device.so 00:05:32.863 LIB libspdk_fsdev_aio.a 00:05:32.863 SO libspdk_fsdev_aio.so.1.0 00:05:32.863 LIB libspdk_sock_posix.a 00:05:32.863 SO libspdk_sock_posix.so.6.0 00:05:33.124 SYMLINK libspdk_fsdev_aio.so 00:05:33.124 SYMLINK libspdk_sock_posix.so 00:05:33.124 CC module/bdev/error/vbdev_error.o 00:05:33.124 CC module/bdev/error/vbdev_error_rpc.o 00:05:33.124 CC module/blobfs/bdev/blobfs_bdev.o 00:05:33.124 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:33.124 CC module/bdev/gpt/gpt.o 00:05:33.125 CC module/bdev/gpt/vbdev_gpt.o 00:05:33.125 CC module/bdev/malloc/bdev_malloc.o 00:05:33.125 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:33.125 CC module/bdev/delay/vbdev_delay.o 00:05:33.125 CC module/bdev/null/bdev_null.o 00:05:33.125 CC module/bdev/null/bdev_null_rpc.o 00:05:33.125 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:33.125 CC module/bdev/ftl/bdev_ftl.o 00:05:33.125 CC module/bdev/nvme/bdev_nvme.o 00:05:33.125 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:33.125 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:33.125 CC module/bdev/nvme/nvme_rpc.o 00:05:33.125 CC module/bdev/nvme/bdev_mdns_client.o 00:05:33.125 CC module/bdev/iscsi/bdev_iscsi.o 00:05:33.125 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:33.125 CC module/bdev/nvme/vbdev_opal.o 00:05:33.125 CC module/bdev/raid/bdev_raid.o 00:05:33.125 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:33.125 CC module/bdev/raid/bdev_raid_rpc.o 00:05:33.125 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:33.125 CC module/bdev/lvol/vbdev_lvol.o 00:05:33.125 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:33.125 CC module/bdev/raid/bdev_raid_sb.o 00:05:33.125 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:33.125 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:33.125 CC module/bdev/raid/raid0.o 00:05:33.125 CC module/bdev/aio/bdev_aio.o 00:05:33.125 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:33.125 CC module/bdev/aio/bdev_aio_rpc.o 00:05:33.125 CC module/bdev/raid/raid1.o 00:05:33.125 CC module/bdev/raid/concat.o 00:05:33.125 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:33.125 CC module/bdev/split/vbdev_split.o 
00:05:33.125 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:33.125 CC module/bdev/passthru/vbdev_passthru.o 00:05:33.125 CC module/bdev/split/vbdev_split_rpc.o 00:05:33.125 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:33.385 LIB libspdk_blobfs_bdev.a 00:05:33.645 SO libspdk_blobfs_bdev.so.6.0 00:05:33.645 LIB libspdk_bdev_error.a 00:05:33.645 LIB libspdk_bdev_gpt.a 00:05:33.645 LIB libspdk_bdev_null.a 00:05:33.645 SYMLINK libspdk_blobfs_bdev.so 00:05:33.645 SO libspdk_bdev_error.so.6.0 00:05:33.645 SO libspdk_bdev_gpt.so.6.0 00:05:33.645 LIB libspdk_bdev_split.a 00:05:33.645 SO libspdk_bdev_null.so.6.0 00:05:33.645 LIB libspdk_bdev_ftl.a 00:05:33.645 LIB libspdk_bdev_passthru.a 00:05:33.645 LIB libspdk_bdev_malloc.a 00:05:33.645 SO libspdk_bdev_split.so.6.0 00:05:33.645 SYMLINK libspdk_bdev_error.so 00:05:33.645 SO libspdk_bdev_ftl.so.6.0 00:05:33.645 SO libspdk_bdev_passthru.so.6.0 00:05:33.645 LIB libspdk_bdev_delay.a 00:05:33.645 SYMLINK libspdk_bdev_null.so 00:05:33.645 SYMLINK libspdk_bdev_gpt.so 00:05:33.645 LIB libspdk_bdev_iscsi.a 00:05:33.645 LIB libspdk_bdev_aio.a 00:05:33.645 LIB libspdk_bdev_zone_block.a 00:05:33.645 SO libspdk_bdev_malloc.so.6.0 00:05:33.645 SO libspdk_bdev_delay.so.6.0 00:05:33.645 SO libspdk_bdev_iscsi.so.6.0 00:05:33.645 SO libspdk_bdev_aio.so.6.0 00:05:33.645 SYMLINK libspdk_bdev_split.so 00:05:33.645 SYMLINK libspdk_bdev_ftl.so 00:05:33.645 SO libspdk_bdev_zone_block.so.6.0 00:05:33.645 SYMLINK libspdk_bdev_passthru.so 00:05:33.906 SYMLINK libspdk_bdev_malloc.so 00:05:33.906 SYMLINK libspdk_bdev_delay.so 00:05:33.906 SYMLINK libspdk_bdev_iscsi.so 00:05:33.906 SYMLINK libspdk_bdev_aio.so 00:05:33.906 SYMLINK libspdk_bdev_zone_block.so 00:05:33.906 LIB libspdk_bdev_lvol.a 00:05:33.906 LIB libspdk_bdev_virtio.a 00:05:33.906 SO libspdk_bdev_lvol.so.6.0 00:05:33.906 SO libspdk_bdev_virtio.so.6.0 00:05:33.906 SYMLINK libspdk_bdev_lvol.so 00:05:33.906 SYMLINK libspdk_bdev_virtio.so 00:05:34.167 LIB libspdk_bdev_raid.a 00:05:34.428 SO libspdk_bdev_raid.so.6.0 00:05:34.428 SYMLINK libspdk_bdev_raid.so 00:05:35.815 LIB libspdk_bdev_nvme.a 00:05:35.815 SO libspdk_bdev_nvme.so.7.1 00:05:35.815 SYMLINK libspdk_bdev_nvme.so 00:05:36.388 CC module/event/subsystems/iobuf/iobuf.o 00:05:36.388 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:36.388 CC module/event/subsystems/vmd/vmd.o 00:05:36.388 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:36.388 CC module/event/subsystems/keyring/keyring.o 00:05:36.388 CC module/event/subsystems/fsdev/fsdev.o 00:05:36.388 CC module/event/subsystems/sock/sock.o 00:05:36.388 CC module/event/subsystems/scheduler/scheduler.o 00:05:36.388 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:36.388 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:36.649 LIB libspdk_event_vfu_tgt.a 00:05:36.649 LIB libspdk_event_keyring.a 00:05:36.649 LIB libspdk_event_vmd.a 00:05:36.649 LIB libspdk_event_scheduler.a 00:05:36.649 LIB libspdk_event_iobuf.a 00:05:36.649 LIB libspdk_event_fsdev.a 00:05:36.649 LIB libspdk_event_vhost_blk.a 00:05:36.649 LIB libspdk_event_sock.a 00:05:36.649 SO libspdk_event_keyring.so.1.0 00:05:36.649 SO libspdk_event_vfu_tgt.so.3.0 00:05:36.649 SO libspdk_event_vmd.so.6.0 00:05:36.649 SO libspdk_event_fsdev.so.1.0 00:05:36.649 SO libspdk_event_scheduler.so.4.0 00:05:36.649 SO libspdk_event_iobuf.so.3.0 00:05:36.649 SO libspdk_event_vhost_blk.so.3.0 00:05:36.649 SO libspdk_event_sock.so.5.0 00:05:36.649 SYMLINK libspdk_event_keyring.so 00:05:36.649 SYMLINK libspdk_event_vfu_tgt.so 00:05:36.649 SYMLINK 
libspdk_event_scheduler.so 00:05:36.649 SYMLINK libspdk_event_fsdev.so 00:05:36.649 SYMLINK libspdk_event_vmd.so 00:05:36.649 SYMLINK libspdk_event_vhost_blk.so 00:05:36.649 SYMLINK libspdk_event_sock.so 00:05:36.649 SYMLINK libspdk_event_iobuf.so 00:05:37.220 CC module/event/subsystems/accel/accel.o 00:05:37.220 LIB libspdk_event_accel.a 00:05:37.220 SO libspdk_event_accel.so.6.0 00:05:37.480 SYMLINK libspdk_event_accel.so 00:05:37.740 CC module/event/subsystems/bdev/bdev.o 00:05:38.001 LIB libspdk_event_bdev.a 00:05:38.001 SO libspdk_event_bdev.so.6.0 00:05:38.001 SYMLINK libspdk_event_bdev.so 00:05:38.262 CC module/event/subsystems/nbd/nbd.o 00:05:38.262 CC module/event/subsystems/scsi/scsi.o 00:05:38.262 CC module/event/subsystems/ublk/ublk.o 00:05:38.262 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:38.262 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:38.523 LIB libspdk_event_nbd.a 00:05:38.523 LIB libspdk_event_ublk.a 00:05:38.523 LIB libspdk_event_scsi.a 00:05:38.523 SO libspdk_event_nbd.so.6.0 00:05:38.523 SO libspdk_event_scsi.so.6.0 00:05:38.523 SO libspdk_event_ublk.so.3.0 00:05:38.523 LIB libspdk_event_nvmf.a 00:05:38.523 SYMLINK libspdk_event_nbd.so 00:05:38.523 SYMLINK libspdk_event_scsi.so 00:05:38.783 SO libspdk_event_nvmf.so.6.0 00:05:38.783 SYMLINK libspdk_event_ublk.so 00:05:38.783 SYMLINK libspdk_event_nvmf.so 00:05:39.044 CC module/event/subsystems/iscsi/iscsi.o 00:05:39.044 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:39.044 LIB libspdk_event_iscsi.a 00:05:39.305 LIB libspdk_event_vhost_scsi.a 00:05:39.305 SO libspdk_event_iscsi.so.6.0 00:05:39.305 SO libspdk_event_vhost_scsi.so.3.0 00:05:39.305 SYMLINK libspdk_event_iscsi.so 00:05:39.305 SYMLINK libspdk_event_vhost_scsi.so 00:05:39.566 SO libspdk.so.6.0 00:05:39.566 SYMLINK libspdk.so 00:05:39.827 CC app/trace_record/trace_record.o 00:05:39.827 CXX app/trace/trace.o 00:05:39.827 CC app/spdk_top/spdk_top.o 00:05:39.827 CC app/spdk_nvme_discover/discovery_aer.o 00:05:39.827 CC app/spdk_nvme_perf/perf.o 00:05:39.827 CC app/spdk_nvme_identify/identify.o 00:05:39.827 CC app/spdk_lspci/spdk_lspci.o 00:05:39.827 TEST_HEADER include/spdk/accel.h 00:05:39.827 TEST_HEADER include/spdk/accel_module.h 00:05:39.827 CC test/rpc_client/rpc_client_test.o 00:05:39.827 TEST_HEADER include/spdk/assert.h 00:05:39.827 TEST_HEADER include/spdk/barrier.h 00:05:39.827 TEST_HEADER include/spdk/base64.h 00:05:39.827 TEST_HEADER include/spdk/bdev.h 00:05:39.827 TEST_HEADER include/spdk/bdev_module.h 00:05:39.827 TEST_HEADER include/spdk/bdev_zone.h 00:05:39.827 TEST_HEADER include/spdk/bit_array.h 00:05:39.827 TEST_HEADER include/spdk/bit_pool.h 00:05:39.827 TEST_HEADER include/spdk/blob_bdev.h 00:05:39.827 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:39.827 TEST_HEADER include/spdk/blobfs.h 00:05:39.827 TEST_HEADER include/spdk/blob.h 00:05:39.827 TEST_HEADER include/spdk/conf.h 00:05:39.827 TEST_HEADER include/spdk/config.h 00:05:39.827 TEST_HEADER include/spdk/cpuset.h 00:05:39.827 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:39.827 TEST_HEADER include/spdk/crc16.h 00:05:39.827 TEST_HEADER include/spdk/crc32.h 00:05:39.827 TEST_HEADER include/spdk/crc64.h 00:05:39.827 TEST_HEADER include/spdk/dif.h 00:05:39.827 TEST_HEADER include/spdk/dma.h 00:05:39.827 TEST_HEADER include/spdk/endian.h 00:05:39.827 TEST_HEADER include/spdk/env_dpdk.h 00:05:39.827 TEST_HEADER include/spdk/env.h 00:05:39.827 TEST_HEADER include/spdk/event.h 00:05:39.827 TEST_HEADER include/spdk/fd_group.h 00:05:39.827 TEST_HEADER include/spdk/fd.h 
00:05:39.827 TEST_HEADER include/spdk/file.h 00:05:39.827 CC app/spdk_dd/spdk_dd.o 00:05:39.827 TEST_HEADER include/spdk/fsdev.h 00:05:39.827 TEST_HEADER include/spdk/ftl.h 00:05:39.827 TEST_HEADER include/spdk/fsdev_module.h 00:05:39.827 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:40.089 TEST_HEADER include/spdk/gpt_spec.h 00:05:40.089 TEST_HEADER include/spdk/hexlify.h 00:05:40.089 CC app/nvmf_tgt/nvmf_main.o 00:05:40.089 TEST_HEADER include/spdk/histogram_data.h 00:05:40.089 TEST_HEADER include/spdk/idxd.h 00:05:40.089 TEST_HEADER include/spdk/init.h 00:05:40.089 TEST_HEADER include/spdk/idxd_spec.h 00:05:40.089 TEST_HEADER include/spdk/ioat.h 00:05:40.089 TEST_HEADER include/spdk/ioat_spec.h 00:05:40.089 TEST_HEADER include/spdk/iscsi_spec.h 00:05:40.089 TEST_HEADER include/spdk/jsonrpc.h 00:05:40.089 TEST_HEADER include/spdk/json.h 00:05:40.089 TEST_HEADER include/spdk/keyring.h 00:05:40.089 TEST_HEADER include/spdk/keyring_module.h 00:05:40.089 TEST_HEADER include/spdk/likely.h 00:05:40.089 TEST_HEADER include/spdk/log.h 00:05:40.089 TEST_HEADER include/spdk/lvol.h 00:05:40.089 TEST_HEADER include/spdk/md5.h 00:05:40.089 TEST_HEADER include/spdk/memory.h 00:05:40.089 TEST_HEADER include/spdk/nbd.h 00:05:40.089 TEST_HEADER include/spdk/mmio.h 00:05:40.089 TEST_HEADER include/spdk/net.h 00:05:40.089 TEST_HEADER include/spdk/notify.h 00:05:40.089 TEST_HEADER include/spdk/nvme.h 00:05:40.089 TEST_HEADER include/spdk/nvme_intel.h 00:05:40.089 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:40.089 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:40.089 TEST_HEADER include/spdk/nvme_zns.h 00:05:40.089 TEST_HEADER include/spdk/nvme_spec.h 00:05:40.089 CC app/iscsi_tgt/iscsi_tgt.o 00:05:40.089 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:40.089 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:40.089 TEST_HEADER include/spdk/nvmf.h 00:05:40.089 TEST_HEADER include/spdk/nvmf_spec.h 00:05:40.089 TEST_HEADER include/spdk/nvmf_transport.h 00:05:40.089 TEST_HEADER include/spdk/opal_spec.h 00:05:40.089 TEST_HEADER include/spdk/opal.h 00:05:40.089 TEST_HEADER include/spdk/pci_ids.h 00:05:40.089 TEST_HEADER include/spdk/queue.h 00:05:40.089 TEST_HEADER include/spdk/pipe.h 00:05:40.089 TEST_HEADER include/spdk/reduce.h 00:05:40.089 TEST_HEADER include/spdk/rpc.h 00:05:40.089 TEST_HEADER include/spdk/scheduler.h 00:05:40.089 TEST_HEADER include/spdk/scsi.h 00:05:40.089 TEST_HEADER include/spdk/scsi_spec.h 00:05:40.089 TEST_HEADER include/spdk/sock.h 00:05:40.089 TEST_HEADER include/spdk/stdinc.h 00:05:40.089 TEST_HEADER include/spdk/string.h 00:05:40.089 TEST_HEADER include/spdk/thread.h 00:05:40.089 TEST_HEADER include/spdk/trace.h 00:05:40.089 TEST_HEADER include/spdk/tree.h 00:05:40.089 TEST_HEADER include/spdk/trace_parser.h 00:05:40.089 CC app/spdk_tgt/spdk_tgt.o 00:05:40.089 TEST_HEADER include/spdk/ublk.h 00:05:40.089 TEST_HEADER include/spdk/util.h 00:05:40.089 TEST_HEADER include/spdk/uuid.h 00:05:40.089 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:40.089 TEST_HEADER include/spdk/version.h 00:05:40.089 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:40.089 TEST_HEADER include/spdk/vmd.h 00:05:40.089 TEST_HEADER include/spdk/vhost.h 00:05:40.089 TEST_HEADER include/spdk/xor.h 00:05:40.089 TEST_HEADER include/spdk/zipf.h 00:05:40.089 CXX test/cpp_headers/accel.o 00:05:40.089 CXX test/cpp_headers/accel_module.o 00:05:40.089 CXX test/cpp_headers/assert.o 00:05:40.089 CXX test/cpp_headers/barrier.o 00:05:40.089 CXX test/cpp_headers/base64.o 00:05:40.089 CXX test/cpp_headers/bdev.o 00:05:40.089 CXX 
test/cpp_headers/bdev_zone.o 00:05:40.089 CXX test/cpp_headers/bdev_module.o 00:05:40.089 CXX test/cpp_headers/bit_array.o 00:05:40.089 CXX test/cpp_headers/blobfs_bdev.o 00:05:40.089 CXX test/cpp_headers/bit_pool.o 00:05:40.089 CXX test/cpp_headers/blob_bdev.o 00:05:40.089 CXX test/cpp_headers/blobfs.o 00:05:40.089 CXX test/cpp_headers/blob.o 00:05:40.089 CXX test/cpp_headers/conf.o 00:05:40.089 CXX test/cpp_headers/config.o 00:05:40.089 CXX test/cpp_headers/cpuset.o 00:05:40.089 CXX test/cpp_headers/crc16.o 00:05:40.089 CXX test/cpp_headers/crc32.o 00:05:40.089 CXX test/cpp_headers/crc64.o 00:05:40.089 CXX test/cpp_headers/dif.o 00:05:40.089 CXX test/cpp_headers/dma.o 00:05:40.089 CXX test/cpp_headers/env_dpdk.o 00:05:40.089 CXX test/cpp_headers/endian.o 00:05:40.089 CXX test/cpp_headers/env.o 00:05:40.089 CXX test/cpp_headers/event.o 00:05:40.089 CXX test/cpp_headers/fd.o 00:05:40.089 CXX test/cpp_headers/fd_group.o 00:05:40.089 CXX test/cpp_headers/file.o 00:05:40.089 CXX test/cpp_headers/fsdev.o 00:05:40.089 CXX test/cpp_headers/ftl.o 00:05:40.089 CXX test/cpp_headers/fsdev_module.o 00:05:40.089 CXX test/cpp_headers/fuse_dispatcher.o 00:05:40.089 CXX test/cpp_headers/gpt_spec.o 00:05:40.089 CXX test/cpp_headers/hexlify.o 00:05:40.089 CXX test/cpp_headers/histogram_data.o 00:05:40.089 CXX test/cpp_headers/idxd_spec.o 00:05:40.089 CXX test/cpp_headers/idxd.o 00:05:40.089 CXX test/cpp_headers/init.o 00:05:40.089 CXX test/cpp_headers/ioat.o 00:05:40.089 CXX test/cpp_headers/ioat_spec.o 00:05:40.089 CXX test/cpp_headers/json.o 00:05:40.089 CXX test/cpp_headers/jsonrpc.o 00:05:40.089 CXX test/cpp_headers/iscsi_spec.o 00:05:40.089 CXX test/cpp_headers/keyring_module.o 00:05:40.089 CXX test/cpp_headers/keyring.o 00:05:40.089 CXX test/cpp_headers/log.o 00:05:40.089 CXX test/cpp_headers/likely.o 00:05:40.089 CXX test/cpp_headers/lvol.o 00:05:40.089 CXX test/cpp_headers/memory.o 00:05:40.089 CXX test/cpp_headers/mmio.o 00:05:40.089 CXX test/cpp_headers/md5.o 00:05:40.089 CXX test/cpp_headers/nbd.o 00:05:40.089 CC examples/util/zipf/zipf.o 00:05:40.089 CXX test/cpp_headers/nvme_ocssd.o 00:05:40.089 CXX test/cpp_headers/net.o 00:05:40.089 CC examples/ioat/verify/verify.o 00:05:40.089 CXX test/cpp_headers/nvme.o 00:05:40.089 CXX test/cpp_headers/notify.o 00:05:40.089 CXX test/cpp_headers/nvme_intel.o 00:05:40.089 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:40.089 CXX test/cpp_headers/nvmf_cmd.o 00:05:40.089 CXX test/cpp_headers/nvmf.o 00:05:40.089 CXX test/cpp_headers/nvme_spec.o 00:05:40.089 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:40.089 CXX test/cpp_headers/nvmf_spec.o 00:05:40.089 CXX test/cpp_headers/nvme_zns.o 00:05:40.089 CXX test/cpp_headers/nvmf_transport.o 00:05:40.089 CXX test/cpp_headers/opal.o 00:05:40.089 CXX test/cpp_headers/pci_ids.o 00:05:40.089 CC examples/ioat/perf/perf.o 00:05:40.089 CXX test/cpp_headers/opal_spec.o 00:05:40.089 CXX test/cpp_headers/pipe.o 00:05:40.089 CXX test/cpp_headers/queue.o 00:05:40.089 CXX test/cpp_headers/reduce.o 00:05:40.089 CXX test/cpp_headers/rpc.o 00:05:40.089 CXX test/cpp_headers/scsi.o 00:05:40.089 CC app/fio/nvme/fio_plugin.o 00:05:40.090 CC test/app/jsoncat/jsoncat.o 00:05:40.090 CXX test/cpp_headers/scheduler.o 00:05:40.090 CXX test/cpp_headers/string.o 00:05:40.090 CXX test/cpp_headers/scsi_spec.o 00:05:40.090 CC test/app/stub/stub.o 00:05:40.090 CXX test/cpp_headers/sock.o 00:05:40.090 CC test/app/histogram_perf/histogram_perf.o 00:05:40.090 CXX test/cpp_headers/stdinc.o 00:05:40.090 CC test/thread/poller_perf/poller_perf.o 00:05:40.090 
CXX test/cpp_headers/trace.o 00:05:40.090 CXX test/cpp_headers/trace_parser.o 00:05:40.090 CXX test/cpp_headers/thread.o 00:05:40.090 CXX test/cpp_headers/tree.o 00:05:40.090 LINK spdk_lspci 00:05:40.090 CXX test/cpp_headers/ublk.o 00:05:40.090 CXX test/cpp_headers/util.o 00:05:40.090 CC test/env/pci/pci_ut.o 00:05:40.090 CC test/env/vtophys/vtophys.o 00:05:40.090 CXX test/cpp_headers/uuid.o 00:05:40.355 CXX test/cpp_headers/vfio_user_pci.o 00:05:40.355 CXX test/cpp_headers/version.o 00:05:40.355 CXX test/cpp_headers/vfio_user_spec.o 00:05:40.355 CXX test/cpp_headers/xor.o 00:05:40.355 CXX test/cpp_headers/vhost.o 00:05:40.355 CXX test/cpp_headers/vmd.o 00:05:40.355 CXX test/cpp_headers/zipf.o 00:05:40.355 CC test/env/memory/memory_ut.o 00:05:40.355 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:40.355 CC test/app/bdev_svc/bdev_svc.o 00:05:40.355 CC app/fio/bdev/fio_plugin.o 00:05:40.355 LINK rpc_client_test 00:05:40.355 LINK spdk_nvme_discover 00:05:40.355 CC test/dma/test_dma/test_dma.o 00:05:40.355 LINK spdk_trace_record 00:05:40.355 LINK nvmf_tgt 00:05:40.623 LINK interrupt_tgt 00:05:40.623 LINK iscsi_tgt 00:05:40.885 LINK histogram_perf 00:05:40.885 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:40.885 LINK verify 00:05:40.885 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:40.885 CC test/env/mem_callbacks/mem_callbacks.o 00:05:40.885 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:40.885 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:40.885 LINK spdk_tgt 00:05:40.885 LINK spdk_trace 00:05:40.885 LINK spdk_dd 00:05:41.145 LINK jsoncat 00:05:41.145 LINK zipf 00:05:41.145 LINK bdev_svc 00:05:41.145 LINK vtophys 00:05:41.145 LINK poller_perf 00:05:41.407 LINK env_dpdk_post_init 00:05:41.407 LINK stub 00:05:41.407 LINK ioat_perf 00:05:41.407 CC app/vhost/vhost.o 00:05:41.407 LINK pci_ut 00:05:41.669 LINK spdk_top 00:05:41.669 LINK test_dma 00:05:41.669 LINK spdk_nvme_identify 00:05:41.669 LINK nvme_fuzz 00:05:41.669 LINK vhost_fuzz 00:05:41.669 CC examples/idxd/perf/perf.o 00:05:41.669 CC examples/sock/hello_world/hello_sock.o 00:05:41.669 LINK spdk_bdev 00:05:41.669 CC examples/vmd/led/led.o 00:05:41.669 CC examples/vmd/lsvmd/lsvmd.o 00:05:41.669 LINK spdk_nvme 00:05:41.669 CC examples/thread/thread/thread_ex.o 00:05:41.669 LINK vhost 00:05:41.669 LINK mem_callbacks 00:05:41.930 LINK spdk_nvme_perf 00:05:41.931 CC test/event/reactor_perf/reactor_perf.o 00:05:41.931 CC test/event/reactor/reactor.o 00:05:41.931 CC test/event/event_perf/event_perf.o 00:05:41.931 LINK lsvmd 00:05:41.931 CC test/event/app_repeat/app_repeat.o 00:05:41.931 LINK led 00:05:41.931 CC test/event/scheduler/scheduler.o 00:05:41.931 LINK hello_sock 00:05:41.931 LINK reactor_perf 00:05:41.931 LINK idxd_perf 00:05:41.931 LINK reactor 00:05:41.931 LINK thread 00:05:41.931 LINK event_perf 00:05:41.931 LINK memory_ut 00:05:42.193 LINK app_repeat 00:05:42.193 LINK scheduler 00:05:42.193 CC test/nvme/aer/aer.o 00:05:42.193 CC test/nvme/sgl/sgl.o 00:05:42.193 CC test/nvme/reserve/reserve.o 00:05:42.193 CC test/nvme/simple_copy/simple_copy.o 00:05:42.193 CC test/nvme/connect_stress/connect_stress.o 00:05:42.193 CC test/nvme/err_injection/err_injection.o 00:05:42.193 CC test/nvme/e2edp/nvme_dp.o 00:05:42.193 CC test/nvme/reset/reset.o 00:05:42.193 CC test/nvme/boot_partition/boot_partition.o 00:05:42.193 CC test/nvme/fdp/fdp.o 00:05:42.193 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:42.193 CC test/nvme/overhead/overhead.o 00:05:42.193 CC test/nvme/fused_ordering/fused_ordering.o 00:05:42.193 CC 
test/nvme/compliance/nvme_compliance.o 00:05:42.193 CC test/nvme/cuse/cuse.o 00:05:42.193 CC test/nvme/startup/startup.o 00:05:42.193 CC test/blobfs/mkfs/mkfs.o 00:05:42.193 CC test/accel/dif/dif.o 00:05:42.454 CC test/lvol/esnap/esnap.o 00:05:42.454 LINK boot_partition 00:05:42.454 LINK err_injection 00:05:42.454 LINK connect_stress 00:05:42.454 LINK startup 00:05:42.454 LINK doorbell_aers 00:05:42.454 LINK reserve 00:05:42.454 LINK fused_ordering 00:05:42.454 LINK mkfs 00:05:42.454 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:42.454 CC examples/nvme/abort/abort.o 00:05:42.454 CC examples/nvme/arbitration/arbitration.o 00:05:42.454 CC examples/nvme/reconnect/reconnect.o 00:05:42.454 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:42.454 LINK aer 00:05:42.454 LINK simple_copy 00:05:42.454 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:42.454 LINK sgl 00:05:42.454 CC examples/nvme/hotplug/hotplug.o 00:05:42.454 CC examples/nvme/hello_world/hello_world.o 00:05:42.454 LINK reset 00:05:42.454 LINK nvme_dp 00:05:42.454 LINK overhead 00:05:42.454 LINK nvme_compliance 00:05:42.716 LINK fdp 00:05:42.716 CC examples/accel/perf/accel_perf.o 00:05:42.716 LINK iscsi_fuzz 00:05:42.716 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:42.716 CC examples/blob/cli/blobcli.o 00:05:42.716 CC examples/blob/hello_world/hello_blob.o 00:05:42.716 LINK pmr_persistence 00:05:42.716 LINK cmb_copy 00:05:42.716 LINK hello_world 00:05:42.716 LINK hotplug 00:05:42.978 LINK dif 00:05:42.978 LINK arbitration 00:05:42.978 LINK reconnect 00:05:42.978 LINK abort 00:05:42.978 LINK hello_blob 00:05:42.978 LINK nvme_manage 00:05:42.978 LINK hello_fsdev 00:05:43.239 LINK accel_perf 00:05:43.239 LINK blobcli 00:05:43.500 LINK cuse 00:05:43.500 CC test/bdev/bdevio/bdevio.o 00:05:43.760 CC examples/bdev/hello_world/hello_bdev.o 00:05:43.760 CC examples/bdev/bdevperf/bdevperf.o 00:05:43.760 LINK bdevio 00:05:44.021 LINK hello_bdev 00:05:44.594 LINK bdevperf 00:05:45.165 CC examples/nvmf/nvmf/nvmf.o 00:05:45.426 LINK nvmf 00:05:46.812 LINK esnap 00:05:47.074 00:05:47.074 real 0m56.509s 00:05:47.074 user 8m9.511s 00:05:47.074 sys 5m37.240s 00:05:47.074 10:33:26 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:47.074 10:33:26 make -- common/autotest_common.sh@10 -- $ set +x 00:05:47.074 ************************************ 00:05:47.074 END TEST make 00:05:47.074 ************************************ 00:05:47.074 10:33:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:47.074 10:33:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:47.074 10:33:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:47.074 10:33:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.074 10:33:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:47.074 10:33:26 -- pm/common@44 -- $ pid=693869 00:05:47.074 10:33:26 -- pm/common@50 -- $ kill -TERM 693869 00:05:47.074 10:33:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.074 10:33:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:47.074 10:33:26 -- pm/common@44 -- $ pid=693870 00:05:47.074 10:33:26 -- pm/common@50 -- $ kill -TERM 693870 00:05:47.074 10:33:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.074 10:33:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 
00:05:47.074 10:33:26 -- pm/common@44 -- $ pid=693872 00:05:47.074 10:33:26 -- pm/common@50 -- $ kill -TERM 693872 00:05:47.074 10:33:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.074 10:33:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:47.074 10:33:26 -- pm/common@44 -- $ pid=693895 00:05:47.074 10:33:26 -- pm/common@50 -- $ sudo -E kill -TERM 693895 00:05:47.335 10:33:26 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:47.335 10:33:26 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:47.335 10:33:26 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.335 10:33:26 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.335 10:33:26 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.335 10:33:26 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.335 10:33:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.335 10:33:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.335 10:33:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.335 10:33:26 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.335 10:33:26 -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.335 10:33:26 -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.335 10:33:26 -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.335 10:33:26 -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.335 10:33:26 -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.335 10:33:26 -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.335 10:33:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.336 10:33:26 -- scripts/common.sh@344 -- # case "$op" in 00:05:47.336 10:33:26 -- scripts/common.sh@345 -- # : 1 00:05:47.336 10:33:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.336 10:33:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.336 10:33:26 -- scripts/common.sh@365 -- # decimal 1 00:05:47.336 10:33:26 -- scripts/common.sh@353 -- # local d=1 00:05:47.336 10:33:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.336 10:33:26 -- scripts/common.sh@355 -- # echo 1 00:05:47.336 10:33:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.336 10:33:26 -- scripts/common.sh@366 -- # decimal 2 00:05:47.336 10:33:26 -- scripts/common.sh@353 -- # local d=2 00:05:47.336 10:33:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.336 10:33:26 -- scripts/common.sh@355 -- # echo 2 00:05:47.336 10:33:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.336 10:33:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.336 10:33:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.336 10:33:26 -- scripts/common.sh@368 -- # return 0 00:05:47.336 10:33:26 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.336 10:33:26 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.336 --rc genhtml_branch_coverage=1 00:05:47.336 --rc genhtml_function_coverage=1 00:05:47.336 --rc genhtml_legend=1 00:05:47.336 --rc geninfo_all_blocks=1 00:05:47.336 --rc geninfo_unexecuted_blocks=1 00:05:47.336 00:05:47.336 ' 00:05:47.336 10:33:26 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.336 --rc genhtml_branch_coverage=1 00:05:47.336 --rc genhtml_function_coverage=1 00:05:47.336 --rc genhtml_legend=1 00:05:47.336 --rc geninfo_all_blocks=1 00:05:47.336 --rc geninfo_unexecuted_blocks=1 00:05:47.336 00:05:47.336 ' 00:05:47.336 10:33:26 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.336 --rc genhtml_branch_coverage=1 00:05:47.336 --rc genhtml_function_coverage=1 00:05:47.336 --rc genhtml_legend=1 00:05:47.336 --rc geninfo_all_blocks=1 00:05:47.336 --rc geninfo_unexecuted_blocks=1 00:05:47.336 00:05:47.336 ' 00:05:47.336 10:33:26 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.336 --rc genhtml_branch_coverage=1 00:05:47.336 --rc genhtml_function_coverage=1 00:05:47.336 --rc genhtml_legend=1 00:05:47.336 --rc geninfo_all_blocks=1 00:05:47.336 --rc geninfo_unexecuted_blocks=1 00:05:47.336 00:05:47.336 ' 00:05:47.336 10:33:26 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.336 10:33:26 -- nvmf/common.sh@7 -- # uname -s 00:05:47.336 10:33:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.336 10:33:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.336 10:33:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.336 10:33:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.336 10:33:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.336 10:33:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.336 10:33:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.336 10:33:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.336 10:33:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.336 10:33:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.336 10:33:26 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:47.336 10:33:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:47.336 10:33:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.336 10:33:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.336 10:33:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.336 10:33:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.336 10:33:26 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.336 10:33:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.336 10:33:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.336 10:33:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.336 10:33:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.336 10:33:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.336 10:33:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.336 10:33:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.336 10:33:26 -- paths/export.sh@5 -- # export PATH 00:05:47.336 10:33:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.336 10:33:26 -- nvmf/common.sh@51 -- # : 0 00:05:47.336 10:33:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.336 10:33:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.336 10:33:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.336 10:33:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.336 10:33:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.336 10:33:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.336 10:33:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.336 10:33:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.336 10:33:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.336 10:33:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:47.336 10:33:26 -- spdk/autotest.sh@32 -- # uname -s 00:05:47.336 10:33:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:47.336 10:33:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:47.336 10:33:26 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
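Two helpers traced around here are compact enough to sketch. First, the lcov gate runs lt 1.15 2 through cmp_versions: split both versions on '.', '-' and ':' and compare field by field. A simplified bash rendition (the real scripts/common.sh also validates each field as a decimal before comparing) is:

  lt() {   # is version $1 strictly older than $2?
    local -a ver1 ver2; local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # older
      ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # newer
    done
    return 1                                          # equal is not less-than
  }
  lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

Second, autotest.sh's prologue swaps the kernel core pattern from systemd-coredump to SPDK's collector. The surrounding trace shows the saved pattern, the collector command and the coredumps directory; how the new pattern lands in /proc is assumed in this sketch (it requires root, which the job has via sudo):

  old_core_pattern=$(cat /proc/sys/kernel/core_pattern)    # the systemd-coredump pipe saved above
  mkdir -p "$output_dir/coredumps"                         # where collected cores land
  echo "|$rootdir/scripts/core-collector.sh %P %s %t" \
    > /proc/sys/kernel/core_pattern                        # assumed redirection target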
00:05:47.336 10:33:26 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:47.336 10:33:26 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:47.336 10:33:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:47.336 10:33:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:47.336 10:33:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:47.336 10:33:26 -- spdk/autotest.sh@48 -- # udevadm_pid=760003 00:05:47.336 10:33:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:47.336 10:33:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:47.336 10:33:26 -- pm/common@17 -- # local monitor 00:05:47.336 10:33:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.336 10:33:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.336 10:33:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.336 10:33:26 -- pm/common@21 -- # date +%s 00:05:47.336 10:33:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.598 10:33:26 -- pm/common@21 -- # date +%s 00:05:47.598 10:33:26 -- pm/common@25 -- # sleep 1 00:05:47.598 10:33:26 -- pm/common@21 -- # date +%s 00:05:47.598 10:33:26 -- pm/common@21 -- # date +%s 00:05:47.598 10:33:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008806 00:05:47.598 10:33:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008806 00:05:47.598 10:33:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008806 00:05:47.598 10:33:26 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008806 00:05:47.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008806_collect-cpu-load.pm.log 00:05:47.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008806_collect-vmstat.pm.log 00:05:47.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008806_collect-cpu-temp.pm.log 00:05:47.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008806_collect-bmc-pm.bmc.pm.log 00:05:48.542 10:33:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:48.542 10:33:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:48.542 10:33:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.542 10:33:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.542 10:33:27 -- spdk/autotest.sh@59 -- # create_test_list 00:05:48.542 10:33:27 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:48.542 10:33:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.542 10:33:27 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:48.542 10:33:27 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:48.542 10:33:27 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:48.542 10:33:27 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:48.542 10:33:27 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:48.542 10:33:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:48.542 10:33:27 -- common/autotest_common.sh@1457 -- # uname 00:05:48.542 10:33:27 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:48.542 10:33:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:48.542 10:33:27 -- common/autotest_common.sh@1477 -- # uname 00:05:48.542 10:33:27 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:48.542 10:33:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:48.542 10:33:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:48.542 lcov: LCOV version 1.15 00:05:48.542 10:33:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:03.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:03.460 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:21.583 10:33:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:21.583 10:33:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.583 10:33:57 -- common/autotest_common.sh@10 -- # set +x 00:06:21.583 10:33:57 -- spdk/autotest.sh@78 -- # rm -f 00:06:21.583 10:33:57 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:22.156 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:06:22.156 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:06:22.417 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:06:22.417 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:06:22.417 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:06:22.417 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:06:22.417 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:06:22.417 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:06:22.417 0000:65:00.0 (144d a80a): Already using the nvme driver 00:06:22.417 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:06:22.417 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:06:22.417 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:06:22.677 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:06:22.677 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:06:22.677 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:06:22.677 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:06:22.677 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:06:22.937 10:34:01 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:06:22.937 10:34:01 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:22.937 10:34:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:22.938 10:34:01 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:22.938 10:34:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:22.938 10:34:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:22.938 10:34:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:22.938 10:34:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:22.938 10:34:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:22.938 10:34:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:22.938 10:34:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:22.938 10:34:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:22.938 10:34:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:22.938 10:34:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:22.938 10:34:02 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:22.938 No valid GPT data, bailing 00:06:22.938 10:34:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:22.938 10:34:02 -- scripts/common.sh@394 -- # pt= 00:06:22.938 10:34:02 -- scripts/common.sh@395 -- # return 1 00:06:22.938 10:34:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:22.938 1+0 records in 00:06:22.938 1+0 records out 00:06:22.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519562 s, 202 MB/s 00:06:22.938 10:34:02 -- spdk/autotest.sh@105 -- # sync 00:06:22.938 10:34:02 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:22.938 10:34:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:22.938 10:34:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:32.942 10:34:10 -- spdk/autotest.sh@111 -- # uname -s 00:06:32.942 10:34:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:32.942 10:34:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:32.942 10:34:10 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:34.856 Hugepages 00:06:34.856 node hugesize free / total 00:06:35.117 node0 1048576kB 0 / 0 00:06:35.117 node0 2048kB 0 / 0 00:06:35.117 node1 1048576kB 0 / 0 00:06:35.117 node1 2048kB 0 / 0 00:06:35.117 00:06:35.117 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:35.117 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:35.117 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:35.117 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:35.117 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:35.117 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:35.118 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:35.118 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:35.118 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:35.118 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:35.118 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:35.118 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:35.118 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:35.118 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:35.118 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:35.118 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:35.118 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:35.118 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:06:35.118 10:34:14 -- spdk/autotest.sh@117 -- # uname -s 00:06:35.118 10:34:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:35.118 10:34:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:35.118 10:34:14 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:39.324 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:39.324 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:40.709 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:40.972 10:34:20 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:41.916 10:34:21 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:41.916 10:34:21 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:41.916 10:34:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:41.916 10:34:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:41.916 10:34:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:41.916 10:34:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:41.916 10:34:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:41.916 10:34:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:41.916 10:34:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:41.916 10:34:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:41.916 10:34:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:06:41.916 10:34:21 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:46.128 Waiting for block devices as requested 00:06:46.128 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:46.128 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:46.128 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:46.128 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:46.128 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:46.128 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:46.128 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:46.128 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:46.128 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:46.389 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:46.389 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:46.389 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:46.651 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:46.651 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:46.651 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:46.912 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:46.912 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:06:47.173 10:34:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:47.173 10:34:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:47.173 10:34:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:47.173 10:34:26 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:06:47.173 10:34:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:47.173 10:34:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:47.173 10:34:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:47.173 10:34:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:47.173 10:34:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:47.173 10:34:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:47.173 10:34:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:47.173 10:34:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:47.173 10:34:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:47.173 10:34:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:06:47.173 10:34:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:47.173 10:34:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:47.173 10:34:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:47.173 10:34:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:47.173 10:34:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:47.173 10:34:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:47.173 10:34:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:47.173 10:34:26 -- common/autotest_common.sh@1543 -- # continue 00:06:47.173 10:34:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:47.173 10:34:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:47.173 10:34:26 -- common/autotest_common.sh@10 -- # set +x 00:06:47.173 10:34:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:47.173 10:34:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.173 10:34:26 -- common/autotest_common.sh@10 -- # set +x 00:06:47.173 10:34:26 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:51.387 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:51.387 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:51.387 10:34:30 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:06:51.387 10:34:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.387 10:34:30 -- common/autotest_common.sh@10 -- # set +x 00:06:51.387 10:34:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:51.387 10:34:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:51.387 10:34:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:51.387 10:34:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:51.387 10:34:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:51.387 10:34:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:51.387 10:34:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:51.387 10:34:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:51.388 10:34:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:51.388 10:34:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:51.388 10:34:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:51.388 10:34:30 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:51.388 10:34:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:51.388 10:34:30 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:51.388 10:34:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:06:51.388 10:34:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:51.388 10:34:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:51.388 10:34:30 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:06:51.388 10:34:30 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:51.388 10:34:30 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:51.388 10:34:30 -- common/autotest_common.sh@1572 -- # return 0 00:06:51.388 10:34:30 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:51.388 10:34:30 -- common/autotest_common.sh@1580 -- # return 0 00:06:51.388 10:34:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:51.388 10:34:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:51.388 10:34:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:51.388 10:34:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:51.388 10:34:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:51.388 10:34:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.388 10:34:30 -- common/autotest_common.sh@10 -- # set +x 00:06:51.388 10:34:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:51.388 10:34:30 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:51.388 10:34:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.388 10:34:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.388 10:34:30 -- common/autotest_common.sh@10 -- # set +x 00:06:51.650 ************************************ 00:06:51.650 START TEST env 00:06:51.650 ************************************ 00:06:51.650 10:34:30 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:51.650 * Looking for test storage... 
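opal_revert_cleanup above walks the controllers reported by gen_nvme.sh and keeps only PCI functions whose device id matches 0x0a54; the lone controller here reports 0xa80a, so the revert list stays empty and the function returns immediately. A compressed sketch of that filter, using the same jq expression as the trace:

  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && echo "$bdf"   # 0xa80a on this box, so nothing is emitted
  done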
00:06:51.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:51.650 10:34:30 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.650 10:34:30 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.650 10:34:30 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.650 10:34:30 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.650 10:34:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.650 10:34:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.650 10:34:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.650 10:34:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.650 10:34:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.650 10:34:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.650 10:34:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.650 10:34:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.650 10:34:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.650 10:34:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.650 10:34:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.650 10:34:30 env -- scripts/common.sh@344 -- # case "$op" in 00:06:51.650 10:34:30 env -- scripts/common.sh@345 -- # : 1 00:06:51.650 10:34:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.650 10:34:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.650 10:34:30 env -- scripts/common.sh@365 -- # decimal 1 00:06:51.650 10:34:30 env -- scripts/common.sh@353 -- # local d=1 00:06:51.650 10:34:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.650 10:34:30 env -- scripts/common.sh@355 -- # echo 1 00:06:51.650 10:34:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.650 10:34:30 env -- scripts/common.sh@366 -- # decimal 2 00:06:51.651 10:34:30 env -- scripts/common.sh@353 -- # local d=2 00:06:51.651 10:34:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.651 10:34:30 env -- scripts/common.sh@355 -- # echo 2 00:06:51.651 10:34:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.651 10:34:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.651 10:34:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.651 10:34:30 env -- scripts/common.sh@368 -- # return 0 00:06:51.651 10:34:30 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.651 10:34:30 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.651 --rc genhtml_branch_coverage=1 00:06:51.651 --rc genhtml_function_coverage=1 00:06:51.651 --rc genhtml_legend=1 00:06:51.651 --rc geninfo_all_blocks=1 00:06:51.651 --rc geninfo_unexecuted_blocks=1 00:06:51.651 00:06:51.651 ' 00:06:51.651 10:34:30 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.651 --rc genhtml_branch_coverage=1 00:06:51.651 --rc genhtml_function_coverage=1 00:06:51.651 --rc genhtml_legend=1 00:06:51.651 --rc geninfo_all_blocks=1 00:06:51.651 --rc geninfo_unexecuted_blocks=1 00:06:51.651 00:06:51.651 ' 00:06:51.651 10:34:30 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.651 --rc genhtml_branch_coverage=1 00:06:51.651 --rc genhtml_function_coverage=1 
00:06:51.651 --rc genhtml_legend=1 00:06:51.651 --rc geninfo_all_blocks=1 00:06:51.651 --rc geninfo_unexecuted_blocks=1 00:06:51.651 00:06:51.651 ' 00:06:51.651 10:34:30 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.651 --rc genhtml_branch_coverage=1 00:06:51.651 --rc genhtml_function_coverage=1 00:06:51.651 --rc genhtml_legend=1 00:06:51.651 --rc geninfo_all_blocks=1 00:06:51.651 --rc geninfo_unexecuted_blocks=1 00:06:51.651 00:06:51.651 ' 00:06:51.651 10:34:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:51.651 10:34:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.651 10:34:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.651 10:34:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:51.651 ************************************ 00:06:51.651 START TEST env_memory 00:06:51.651 ************************************ 00:06:51.651 10:34:30 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:51.913 00:06:51.913 00:06:51.913 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.913 http://cunit.sourceforge.net/ 00:06:51.913 00:06:51.913 00:06:51.913 Suite: memory 00:06:51.913 Test: alloc and free memory map ...[2024-11-19 10:34:30.895370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:51.913 passed 00:06:51.913 Test: mem map translation ...[2024-11-19 10:34:30.921022] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:51.913 [2024-11-19 10:34:30.921052] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:51.913 [2024-11-19 10:34:30.921098] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:51.913 [2024-11-19 10:34:30.921106] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:51.913 passed 00:06:51.913 Test: mem map registration ...[2024-11-19 10:34:30.976397] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:51.913 [2024-11-19 10:34:30.976429] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:51.913 passed 00:06:51.913 Test: mem map adjacent registrations ...passed 00:06:51.913 00:06:51.913 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.913 suites 1 1 n/a 0 0 00:06:51.913 tests 4 4 4 0 0 00:06:51.913 asserts 152 152 152 0 n/a 00:06:51.913 00:06:51.913 Elapsed time = 0.193 seconds 00:06:51.913 00:06:51.913 real 0m0.208s 00:06:51.913 user 0m0.198s 00:06:51.913 sys 0m0.009s 00:06:51.913 10:34:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.913 10:34:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:06:51.913 ************************************ 00:06:51.913 END TEST env_memory 00:06:51.913 ************************************ 00:06:51.913 10:34:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:51.913 10:34:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.913 10:34:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.913 10:34:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:52.175 ************************************ 00:06:52.175 START TEST env_vtophys 00:06:52.175 ************************************ 00:06:52.175 10:34:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:52.175 EAL: lib.eal log level changed from notice to debug 00:06:52.175 EAL: Detected lcore 0 as core 0 on socket 0 00:06:52.175 EAL: Detected lcore 1 as core 1 on socket 0 00:06:52.175 EAL: Detected lcore 2 as core 2 on socket 0 00:06:52.175 EAL: Detected lcore 3 as core 3 on socket 0 00:06:52.175 EAL: Detected lcore 4 as core 4 on socket 0 00:06:52.175 EAL: Detected lcore 5 as core 5 on socket 0 00:06:52.175 EAL: Detected lcore 6 as core 6 on socket 0 00:06:52.175 EAL: Detected lcore 7 as core 7 on socket 0 00:06:52.175 EAL: Detected lcore 8 as core 8 on socket 0 00:06:52.175 EAL: Detected lcore 9 as core 9 on socket 0 00:06:52.175 EAL: Detected lcore 10 as core 10 on socket 0 00:06:52.175 EAL: Detected lcore 11 as core 11 on socket 0 00:06:52.175 EAL: Detected lcore 12 as core 12 on socket 0 00:06:52.175 EAL: Detected lcore 13 as core 13 on socket 0 00:06:52.175 EAL: Detected lcore 14 as core 14 on socket 0 00:06:52.175 EAL: Detected lcore 15 as core 15 on socket 0 00:06:52.175 EAL: Detected lcore 16 as core 16 on socket 0 00:06:52.175 EAL: Detected lcore 17 as core 17 on socket 0 00:06:52.175 EAL: Detected lcore 18 as core 18 on socket 0 00:06:52.175 EAL: Detected lcore 19 as core 19 on socket 0 00:06:52.175 EAL: Detected lcore 20 as core 20 on socket 0 00:06:52.175 EAL: Detected lcore 21 as core 21 on socket 0 00:06:52.175 EAL: Detected lcore 22 as core 22 on socket 0 00:06:52.175 EAL: Detected lcore 23 as core 23 on socket 0 00:06:52.175 EAL: Detected lcore 24 as core 24 on socket 0 00:06:52.175 EAL: Detected lcore 25 as core 25 on socket 0 00:06:52.175 EAL: Detected lcore 26 as core 26 on socket 0 00:06:52.175 EAL: Detected lcore 27 as core 27 on socket 0 00:06:52.175 EAL: Detected lcore 28 as core 28 on socket 0 00:06:52.175 EAL: Detected lcore 29 as core 29 on socket 0 00:06:52.175 EAL: Detected lcore 30 as core 30 on socket 0 00:06:52.175 EAL: Detected lcore 31 as core 31 on socket 0 00:06:52.175 EAL: Detected lcore 32 as core 32 on socket 0 00:06:52.175 EAL: Detected lcore 33 as core 33 on socket 0 00:06:52.175 EAL: Detected lcore 34 as core 34 on socket 0 00:06:52.175 EAL: Detected lcore 35 as core 35 on socket 0 00:06:52.175 EAL: Detected lcore 36 as core 0 on socket 1 00:06:52.175 EAL: Detected lcore 37 as core 1 on socket 1 00:06:52.175 EAL: Detected lcore 38 as core 2 on socket 1 00:06:52.175 EAL: Detected lcore 39 as core 3 on socket 1 00:06:52.175 EAL: Detected lcore 40 as core 4 on socket 1 00:06:52.175 EAL: Detected lcore 41 as core 5 on socket 1 00:06:52.175 EAL: Detected lcore 42 as core 6 on socket 1 00:06:52.175 EAL: Detected lcore 43 as core 7 on socket 1 00:06:52.175 EAL: Detected lcore 44 as core 8 on socket 1 00:06:52.175 EAL: Detected lcore 45 as core 9 on socket 1 
00:06:52.175 EAL: Detected lcore 46 as core 10 on socket 1 00:06:52.175 EAL: Detected lcore 47 as core 11 on socket 1 00:06:52.175 EAL: Detected lcore 48 as core 12 on socket 1 00:06:52.175 EAL: Detected lcore 49 as core 13 on socket 1 00:06:52.175 EAL: Detected lcore 50 as core 14 on socket 1 00:06:52.175 EAL: Detected lcore 51 as core 15 on socket 1 00:06:52.175 EAL: Detected lcore 52 as core 16 on socket 1 00:06:52.175 EAL: Detected lcore 53 as core 17 on socket 1 00:06:52.175 EAL: Detected lcore 54 as core 18 on socket 1 00:06:52.175 EAL: Detected lcore 55 as core 19 on socket 1 00:06:52.175 EAL: Detected lcore 56 as core 20 on socket 1 00:06:52.175 EAL: Detected lcore 57 as core 21 on socket 1 00:06:52.175 EAL: Detected lcore 58 as core 22 on socket 1 00:06:52.175 EAL: Detected lcore 59 as core 23 on socket 1 00:06:52.175 EAL: Detected lcore 60 as core 24 on socket 1 00:06:52.175 EAL: Detected lcore 61 as core 25 on socket 1 00:06:52.175 EAL: Detected lcore 62 as core 26 on socket 1 00:06:52.175 EAL: Detected lcore 63 as core 27 on socket 1 00:06:52.175 EAL: Detected lcore 64 as core 28 on socket 1 00:06:52.175 EAL: Detected lcore 65 as core 29 on socket 1 00:06:52.175 EAL: Detected lcore 66 as core 30 on socket 1 00:06:52.175 EAL: Detected lcore 67 as core 31 on socket 1 00:06:52.175 EAL: Detected lcore 68 as core 32 on socket 1 00:06:52.175 EAL: Detected lcore 69 as core 33 on socket 1 00:06:52.175 EAL: Detected lcore 70 as core 34 on socket 1 00:06:52.175 EAL: Detected lcore 71 as core 35 on socket 1 00:06:52.175 EAL: Detected lcore 72 as core 0 on socket 0 00:06:52.175 EAL: Detected lcore 73 as core 1 on socket 0 00:06:52.175 EAL: Detected lcore 74 as core 2 on socket 0 00:06:52.175 EAL: Detected lcore 75 as core 3 on socket 0 00:06:52.175 EAL: Detected lcore 76 as core 4 on socket 0 00:06:52.175 EAL: Detected lcore 77 as core 5 on socket 0 00:06:52.175 EAL: Detected lcore 78 as core 6 on socket 0 00:06:52.175 EAL: Detected lcore 79 as core 7 on socket 0 00:06:52.175 EAL: Detected lcore 80 as core 8 on socket 0 00:06:52.175 EAL: Detected lcore 81 as core 9 on socket 0 00:06:52.175 EAL: Detected lcore 82 as core 10 on socket 0 00:06:52.175 EAL: Detected lcore 83 as core 11 on socket 0 00:06:52.175 EAL: Detected lcore 84 as core 12 on socket 0 00:06:52.175 EAL: Detected lcore 85 as core 13 on socket 0 00:06:52.175 EAL: Detected lcore 86 as core 14 on socket 0 00:06:52.175 EAL: Detected lcore 87 as core 15 on socket 0 00:06:52.175 EAL: Detected lcore 88 as core 16 on socket 0 00:06:52.175 EAL: Detected lcore 89 as core 17 on socket 0 00:06:52.175 EAL: Detected lcore 90 as core 18 on socket 0 00:06:52.175 EAL: Detected lcore 91 as core 19 on socket 0 00:06:52.175 EAL: Detected lcore 92 as core 20 on socket 0 00:06:52.175 EAL: Detected lcore 93 as core 21 on socket 0 00:06:52.175 EAL: Detected lcore 94 as core 22 on socket 0 00:06:52.175 EAL: Detected lcore 95 as core 23 on socket 0 00:06:52.175 EAL: Detected lcore 96 as core 24 on socket 0 00:06:52.175 EAL: Detected lcore 97 as core 25 on socket 0 00:06:52.175 EAL: Detected lcore 98 as core 26 on socket 0 00:06:52.175 EAL: Detected lcore 99 as core 27 on socket 0 00:06:52.175 EAL: Detected lcore 100 as core 28 on socket 0 00:06:52.175 EAL: Detected lcore 101 as core 29 on socket 0 00:06:52.175 EAL: Detected lcore 102 as core 30 on socket 0 00:06:52.175 EAL: Detected lcore 103 as core 31 on socket 0 00:06:52.175 EAL: Detected lcore 104 as core 32 on socket 0 00:06:52.175 EAL: Detected lcore 105 as core 33 on socket 0 00:06:52.175 EAL: 
Detected lcore 106 as core 34 on socket 0 00:06:52.175 EAL: Detected lcore 107 as core 35 on socket 0 00:06:52.175 EAL: Detected lcore 108 as core 0 on socket 1 00:06:52.175 EAL: Detected lcore 109 as core 1 on socket 1 00:06:52.175 EAL: Detected lcore 110 as core 2 on socket 1 00:06:52.175 EAL: Detected lcore 111 as core 3 on socket 1 00:06:52.175 EAL: Detected lcore 112 as core 4 on socket 1 00:06:52.175 EAL: Detected lcore 113 as core 5 on socket 1 00:06:52.175 EAL: Detected lcore 114 as core 6 on socket 1 00:06:52.175 EAL: Detected lcore 115 as core 7 on socket 1 00:06:52.175 EAL: Detected lcore 116 as core 8 on socket 1 00:06:52.175 EAL: Detected lcore 117 as core 9 on socket 1 00:06:52.175 EAL: Detected lcore 118 as core 10 on socket 1 00:06:52.175 EAL: Detected lcore 119 as core 11 on socket 1 00:06:52.175 EAL: Detected lcore 120 as core 12 on socket 1 00:06:52.175 EAL: Detected lcore 121 as core 13 on socket 1 00:06:52.175 EAL: Detected lcore 122 as core 14 on socket 1 00:06:52.175 EAL: Detected lcore 123 as core 15 on socket 1 00:06:52.175 EAL: Detected lcore 124 as core 16 on socket 1 00:06:52.175 EAL: Detected lcore 125 as core 17 on socket 1 00:06:52.175 EAL: Detected lcore 126 as core 18 on socket 1 00:06:52.175 EAL: Detected lcore 127 as core 19 on socket 1 00:06:52.175 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:52.175 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:52.175 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:52.175 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:52.175 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:52.175 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:52.175 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:52.175 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:52.175 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:52.175 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:52.176 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:52.176 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:52.176 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:52.176 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:52.176 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:52.176 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:52.176 EAL: Maximum logical cores by configuration: 128 00:06:52.176 EAL: Detected CPU lcores: 128 00:06:52.176 EAL: Detected NUMA nodes: 2 00:06:52.176 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:52.176 EAL: Detected shared linkage of DPDK 00:06:52.176 EAL: No shared files mode enabled, IPC will be disabled 00:06:52.176 EAL: Bus pci wants IOVA as 'DC' 00:06:52.176 EAL: Buses did not request a specific IOVA mode. 00:06:52.176 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:52.176 EAL: Selected IOVA mode 'VA' 00:06:52.176 EAL: Probing VFIO support... 00:06:52.176 EAL: IOMMU type 1 (Type 1) is supported 00:06:52.176 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:52.176 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:52.176 EAL: VFIO support initialized 00:06:52.176 EAL: Ask a virtual area of 0x2e000 bytes 00:06:52.176 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:52.176 EAL: Setting up physically contiguous memory... 
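Each memseg list the EAL reserves below is sized as n_segs times the detected hugepage size, which can be checked directly against the hexadecimal asks that follow: 8192 segments of 2 MiB come to 16 GiB per list, and the small 0x61000 ask preceding each one is presumably the list's bookkeeping array:

  printf '0x%x\n' $((8192 * 2097152))   # -> 0x400000000, matching every large 'Ask a virtual area'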
00:06:52.176 EAL: Setting maximum number of open files to 524288 00:06:52.176 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:52.176 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:52.176 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:52.176 EAL: Ask a virtual area of 0x61000 bytes 00:06:52.176 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:52.176 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:52.176 EAL: Ask a virtual area of 0x400000000 bytes 00:06:52.176 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:52.176 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:52.176 EAL: Ask a virtual area of 0x61000 bytes 00:06:52.176 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:52.176 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:52.176 EAL: Ask a virtual area of 0x400000000 bytes 00:06:52.176 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:52.176 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:52.176 EAL: Ask a virtual area of 0x61000 bytes 00:06:52.176 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:52.176 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:52.176 EAL: Ask a virtual area of 0x400000000 bytes 00:06:52.176 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:52.176 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:52.176 EAL: Ask a virtual area of 0x61000 bytes 00:06:52.176 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:52.176 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:52.176 EAL: Ask a virtual area of 0x400000000 bytes 00:06:52.176 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:52.176 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:52.176 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:52.176 EAL: Ask a virtual area of 0x61000 bytes 00:06:52.176 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:52.176 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:52.176 EAL: Ask a virtual area of 0x400000000 bytes 00:06:52.176 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:52.176 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:52.176 EAL: Ask a virtual area of 0x61000 bytes 00:06:52.176 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:52.176 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:52.176 EAL: Ask a virtual area of 0x400000000 bytes 00:06:52.176 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:52.176 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:52.176 EAL: Ask a virtual area of 0x61000 bytes 00:06:52.176 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:52.176 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:52.176 EAL: Ask a virtual area of 0x400000000 bytes 00:06:52.176 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:52.176 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:52.176 EAL: Ask a virtual area of 0x61000 bytes 00:06:52.176 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:52.176 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:52.176 EAL: Ask a virtual area of 0x400000000 bytes 00:06:52.176 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:52.176 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:52.176 EAL: Hugepages will be freed exactly as allocated. 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: TSC frequency is ~2400000 KHz 00:06:52.176 EAL: Main lcore 0 is ready (tid=7faaf59bda00;cpuset=[0]) 00:06:52.176 EAL: Trying to obtain current memory policy. 00:06:52.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.176 EAL: Restoring previous memory policy: 0 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was expanded by 2MB 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:52.176 EAL: Mem event callback 'spdk:(nil)' registered 00:06:52.176 00:06:52.176 00:06:52.176 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.176 http://cunit.sourceforge.net/ 00:06:52.176 00:06:52.176 00:06:52.176 Suite: components_suite 00:06:52.176 Test: vtophys_malloc_test ...passed 00:06:52.176 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:52.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.176 EAL: Restoring previous memory policy: 4 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was expanded by 4MB 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was shrunk by 4MB 00:06:52.176 EAL: Trying to obtain current memory policy. 00:06:52.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.176 EAL: Restoring previous memory policy: 4 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was expanded by 6MB 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was shrunk by 6MB 00:06:52.176 EAL: Trying to obtain current memory policy. 00:06:52.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.176 EAL: Restoring previous memory policy: 4 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was expanded by 10MB 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was shrunk by 10MB 00:06:52.176 EAL: Trying to obtain current memory policy. 
00:06:52.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.176 EAL: Restoring previous memory policy: 4 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was expanded by 18MB 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was shrunk by 18MB 00:06:52.176 EAL: Trying to obtain current memory policy. 00:06:52.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.176 EAL: Restoring previous memory policy: 4 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was expanded by 34MB 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was shrunk by 34MB 00:06:52.176 EAL: Trying to obtain current memory policy. 00:06:52.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.176 EAL: Restoring previous memory policy: 4 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was expanded by 66MB 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was shrunk by 66MB 00:06:52.176 EAL: Trying to obtain current memory policy. 00:06:52.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.176 EAL: Restoring previous memory policy: 4 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was expanded by 130MB 00:06:52.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.176 EAL: request: mp_malloc_sync 00:06:52.176 EAL: No shared files mode enabled, IPC is disabled 00:06:52.176 EAL: Heap on socket 0 was shrunk by 130MB 00:06:52.176 EAL: Trying to obtain current memory policy. 00:06:52.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.437 EAL: Restoring previous memory policy: 4 00:06:52.437 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.437 EAL: request: mp_malloc_sync 00:06:52.437 EAL: No shared files mode enabled, IPC is disabled 00:06:52.438 EAL: Heap on socket 0 was expanded by 258MB 00:06:52.438 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.438 EAL: request: mp_malloc_sync 00:06:52.438 EAL: No shared files mode enabled, IPC is disabled 00:06:52.438 EAL: Heap on socket 0 was shrunk by 258MB 00:06:52.438 EAL: Trying to obtain current memory policy. 
00:06:52.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.438 EAL: Restoring previous memory policy: 4 00:06:52.438 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.438 EAL: request: mp_malloc_sync 00:06:52.438 EAL: No shared files mode enabled, IPC is disabled 00:06:52.438 EAL: Heap on socket 0 was expanded by 514MB 00:06:52.438 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.438 EAL: request: mp_malloc_sync 00:06:52.438 EAL: No shared files mode enabled, IPC is disabled 00:06:52.438 EAL: Heap on socket 0 was shrunk by 514MB 00:06:52.438 EAL: Trying to obtain current memory policy. 00:06:52.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:52.698 EAL: Restoring previous memory policy: 4 00:06:52.698 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.698 EAL: request: mp_malloc_sync 00:06:52.698 EAL: No shared files mode enabled, IPC is disabled 00:06:52.698 EAL: Heap on socket 0 was expanded by 1026MB 00:06:52.698 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.959 EAL: request: mp_malloc_sync 00:06:52.959 EAL: No shared files mode enabled, IPC is disabled 00:06:52.959 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:52.959 passed 00:06:52.959 00:06:52.959 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.959 suites 1 1 n/a 0 0 00:06:52.959 tests 2 2 2 0 0 00:06:52.959 asserts 497 497 497 0 n/a 00:06:52.959 00:06:52.959 Elapsed time = 0.689 seconds 00:06:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.959 EAL: request: mp_malloc_sync 00:06:52.959 EAL: No shared files mode enabled, IPC is disabled 00:06:52.959 EAL: Heap on socket 0 was shrunk by 2MB 00:06:52.959 EAL: No shared files mode enabled, IPC is disabled 00:06:52.959 EAL: No shared files mode enabled, IPC is disabled 00:06:52.959 EAL: No shared files mode enabled, IPC is disabled 00:06:52.959 00:06:52.959 real 0m0.841s 00:06:52.959 user 0m0.444s 00:06:52.959 sys 0m0.369s 00:06:52.959 10:34:31 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.959 10:34:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:52.959 ************************************ 00:06:52.959 END TEST env_vtophys 00:06:52.959 ************************************ 00:06:52.959 10:34:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:52.959 10:34:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.959 10:34:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.959 10:34:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:52.959 ************************************ 00:06:52.959 START TEST env_pci 00:06:52.959 ************************************ 00:06:52.959 10:34:32 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:52.959 00:06:52.959 00:06:52.959 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.959 http://cunit.sourceforge.net/ 00:06:52.959 00:06:52.959 00:06:52.959 Suite: pci 00:06:52.959 Test: pci_hook ...[2024-11-19 10:34:32.070183] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 779396 has claimed it 00:06:52.959 EAL: Cannot find device (10000:00:01.0) 00:06:52.959 EAL: Failed to attach device on primary process 00:06:52.959 passed 00:06:52.959 00:06:52.959 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:52.959 suites 1 1 n/a 0 0 00:06:52.959 tests 1 1 1 0 0 00:06:52.959 asserts 25 25 25 0 n/a 00:06:52.959 00:06:52.959 Elapsed time = 0.030 seconds 00:06:52.959 00:06:52.959 real 0m0.052s 00:06:52.959 user 0m0.014s 00:06:52.959 sys 0m0.037s 00:06:52.959 10:34:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.959 10:34:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:52.959 ************************************ 00:06:52.959 END TEST env_pci 00:06:52.959 ************************************ 00:06:52.959 10:34:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:52.959 10:34:32 env -- env/env.sh@15 -- # uname 00:06:52.959 10:34:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:52.959 10:34:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:52.959 10:34:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:52.959 10:34:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:52.959 10:34:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.959 10:34:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:53.220 ************************************ 00:06:53.220 START TEST env_dpdk_post_init 00:06:53.220 ************************************ 00:06:53.220 10:34:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:53.220 EAL: Detected CPU lcores: 128 00:06:53.220 EAL: Detected NUMA nodes: 2 00:06:53.220 EAL: Detected shared linkage of DPDK 00:06:53.220 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:53.220 EAL: Selected IOVA mode 'VA' 00:06:53.220 EAL: VFIO support initialized 00:06:53.220 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:53.220 EAL: Using IOMMU type 1 (Type 1) 00:06:53.480 EAL: Ignore mapping IO port bar(1) 00:06:53.480 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:53.741 EAL: Ignore mapping IO port bar(1) 00:06:53.741 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:53.741 EAL: Ignore mapping IO port bar(1) 00:06:54.001 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:54.001 EAL: Ignore mapping IO port bar(1) 00:06:54.262 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:54.262 EAL: Ignore mapping IO port bar(1) 00:06:54.522 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:54.522 EAL: Ignore mapping IO port bar(1) 00:06:54.522 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:54.783 EAL: Ignore mapping IO port bar(1) 00:06:54.783 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:55.043 EAL: Ignore mapping IO port bar(1) 00:06:55.043 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:55.304 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:55.304 EAL: Ignore mapping IO port bar(1) 00:06:55.564 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:55.564 EAL: Ignore mapping IO port bar(1) 00:06:55.825 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:55.825 EAL: Ignore mapping IO port bar(1) 00:06:56.085 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:56.085 EAL: Ignore mapping IO port bar(1) 00:06:56.086 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:56.346 EAL: Ignore mapping IO port bar(1) 00:06:56.346 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:56.607 EAL: Ignore mapping IO port bar(1) 00:06:56.607 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:56.868 EAL: Ignore mapping IO port bar(1) 00:06:56.868 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:56.868 EAL: Ignore mapping IO port bar(1) 00:06:57.128 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:57.128 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:57.128 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:57.128 Starting DPDK initialization... 00:06:57.128 Starting SPDK post initialization... 00:06:57.128 SPDK NVMe probe 00:06:57.128 Attaching to 0000:65:00.0 00:06:57.128 Attached to 0000:65:00.0 00:06:57.128 Cleaning up... 00:06:59.041 00:06:59.041 real 0m5.748s 00:06:59.041 user 0m0.101s 00:06:59.041 sys 0m0.202s 00:06:59.042 10:34:37 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.042 10:34:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:59.042 ************************************ 00:06:59.042 END TEST env_dpdk_post_init 00:06:59.042 ************************************ 00:06:59.042 10:34:37 env -- env/env.sh@26 -- # uname 00:06:59.042 10:34:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:59.042 10:34:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:59.042 10:34:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.042 10:34:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.042 10:34:37 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.042 ************************************ 00:06:59.042 START TEST env_mem_callbacks 00:06:59.042 ************************************ 00:06:59.042 10:34:38 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:59.042 EAL: Detected CPU lcores: 128 00:06:59.042 EAL: Detected NUMA nodes: 2 00:06:59.042 EAL: Detected shared linkage of DPDK 00:06:59.042 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:59.042 EAL: Selected IOVA mode 'VA' 00:06:59.042 EAL: VFIO support initialized 00:06:59.042 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:59.042 00:06:59.042 00:06:59.042 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.042 http://cunit.sourceforge.net/ 00:06:59.042 00:06:59.042 00:06:59.042 Suite: memory 00:06:59.042 Test: test ... 
00:06:59.042 register 0x200000200000 2097152 00:06:59.042 malloc 3145728 00:06:59.042 register 0x200000400000 4194304 00:06:59.042 buf 0x200000500000 len 3145728 PASSED 00:06:59.042 malloc 64 00:06:59.042 buf 0x2000004fff40 len 64 PASSED 00:06:59.042 malloc 4194304 00:06:59.042 register 0x200000800000 6291456 00:06:59.042 buf 0x200000a00000 len 4194304 PASSED 00:06:59.042 free 0x200000500000 3145728 00:06:59.042 free 0x2000004fff40 64 00:06:59.042 unregister 0x200000400000 4194304 PASSED 00:06:59.042 free 0x200000a00000 4194304 00:06:59.042 unregister 0x200000800000 6291456 PASSED 00:06:59.042 malloc 8388608 00:06:59.042 register 0x200000400000 10485760 00:06:59.042 buf 0x200000600000 len 8388608 PASSED 00:06:59.042 free 0x200000600000 8388608 00:06:59.042 unregister 0x200000400000 10485760 PASSED 00:06:59.042 passed 00:06:59.042 00:06:59.042 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.042 suites 1 1 n/a 0 0 00:06:59.042 tests 1 1 1 0 0 00:06:59.042 asserts 15 15 15 0 n/a 00:06:59.042 00:06:59.042 Elapsed time = 0.010 seconds 00:06:59.042 00:06:59.042 real 0m0.070s 00:06:59.042 user 0m0.026s 00:06:59.042 sys 0m0.043s 00:06:59.042 10:34:38 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.042 10:34:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:59.042 ************************************ 00:06:59.042 END TEST env_mem_callbacks 00:06:59.042 ************************************ 00:06:59.042 00:06:59.042 real 0m7.542s 00:06:59.042 user 0m1.077s 00:06:59.042 sys 0m1.025s 00:06:59.042 10:34:38 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.042 10:34:38 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.042 ************************************ 00:06:59.042 END TEST env 00:06:59.042 ************************************ 00:06:59.042 10:34:38 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:59.042 10:34:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.042 10:34:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.042 10:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.042 ************************************ 00:06:59.042 START TEST rpc 00:06:59.042 ************************************ 00:06:59.042 10:34:38 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:59.303 * Looking for test storage... 
00:06:59.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:59.303 10:34:38 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.303 10:34:38 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.303 10:34:38 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.303 10:34:38 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.303 10:34:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.303 10:34:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.303 10:34:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.303 10:34:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.303 10:34:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.303 10:34:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.303 10:34:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.303 10:34:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.303 10:34:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.303 10:34:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.303 10:34:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.303 10:34:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:59.303 10:34:38 rpc -- scripts/common.sh@345 -- # : 1 00:06:59.303 10:34:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.303 10:34:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.303 10:34:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:59.303 10:34:38 rpc -- scripts/common.sh@353 -- # local d=1 00:06:59.303 10:34:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.303 10:34:38 rpc -- scripts/common.sh@355 -- # echo 1 00:06:59.303 10:34:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.303 10:34:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:59.303 10:34:38 rpc -- scripts/common.sh@353 -- # local d=2 00:06:59.303 10:34:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.303 10:34:38 rpc -- scripts/common.sh@355 -- # echo 2 00:06:59.303 10:34:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.303 10:34:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.303 10:34:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.303 10:34:38 rpc -- scripts/common.sh@368 -- # return 0 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.304 --rc genhtml_branch_coverage=1 00:06:59.304 --rc genhtml_function_coverage=1 00:06:59.304 --rc genhtml_legend=1 00:06:59.304 --rc geninfo_all_blocks=1 00:06:59.304 --rc geninfo_unexecuted_blocks=1 00:06:59.304 00:06:59.304 ' 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.304 --rc genhtml_branch_coverage=1 00:06:59.304 --rc genhtml_function_coverage=1 00:06:59.304 --rc genhtml_legend=1 00:06:59.304 --rc geninfo_all_blocks=1 00:06:59.304 --rc geninfo_unexecuted_blocks=1 00:06:59.304 00:06:59.304 ' 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.304 --rc genhtml_branch_coverage=1 00:06:59.304 --rc genhtml_function_coverage=1 
00:06:59.304 --rc genhtml_legend=1 00:06:59.304 --rc geninfo_all_blocks=1 00:06:59.304 --rc geninfo_unexecuted_blocks=1 00:06:59.304 00:06:59.304 ' 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.304 --rc genhtml_branch_coverage=1 00:06:59.304 --rc genhtml_function_coverage=1 00:06:59.304 --rc genhtml_legend=1 00:06:59.304 --rc geninfo_all_blocks=1 00:06:59.304 --rc geninfo_unexecuted_blocks=1 00:06:59.304 00:06:59.304 ' 00:06:59.304 10:34:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=780732 00:06:59.304 10:34:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.304 10:34:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:59.304 10:34:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 780732 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@835 -- # '[' -z 780732 ']' 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.304 10:34:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.304 [2024-11-19 10:34:38.479616] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:06:59.304 [2024-11-19 10:34:38.479681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780732 ] 00:06:59.565 [2024-11-19 10:34:38.572228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.565 [2024-11-19 10:34:38.624774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:59.565 [2024-11-19 10:34:38.624828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 780732' to capture a snapshot of events at runtime. 00:06:59.565 [2024-11-19 10:34:38.624837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.565 [2024-11-19 10:34:38.624845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.565 [2024-11-19 10:34:38.624852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid780732 for offline analysis/debug. 
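At this point spdk_tgt is up on /var/tmp/spdk.sock, and the rpc tests below drive it through rpc_cmd, a thin client for the JSON-RPC methods the target registered at startup. As a hedged sketch of the server side, this is the registration pattern from SPDK's public RPC API; the method name example_ping and its handler are invented for illustration, while real methods such as bdev_get_bdevs are wired up the same way:

    #include "spdk/rpc.h"
    #include "spdk/jsonrpc.h"
    #include "spdk/json.h"

    /* Handler run when a client posts {"method": "example_ping"}. */
    static void
    rpc_example_ping(struct spdk_jsonrpc_request *request,
                     const struct spdk_json_val *params)
    {
            struct spdk_json_write_ctx *w;

            if (params != NULL) {
                    spdk_jsonrpc_send_error_response(request,
                                                     SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                                     "example_ping takes no parameters");
                    return;
            }
            w = spdk_jsonrpc_begin_result(request);
            spdk_json_write_string(w, "pong");
            spdk_jsonrpc_end_result(request, w);
    }
    SPDK_RPC_REGISTER("example_ping", rpc_example_ping, SPDK_RPC_RUNTIME)

The state-mask argument (SPDK_RPC_STARTUP vs SPDK_RPC_RUNTIME) controls when a method may be called, which is why some RPCs only work once the target has finished initializing.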
00:06:59.565 [2024-11-19 10:34:38.625643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.138 10:34:39 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.138 10:34:39 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:00.138 10:34:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:00.139 10:34:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:00.139 10:34:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:00.139 10:34:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:00.139 10:34:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.139 10:34:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.139 10:34:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.399 ************************************ 00:07:00.399 START TEST rpc_integrity 00:07:00.399 ************************************ 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:00.399 { 00:07:00.399 "name": "Malloc0", 00:07:00.399 "aliases": [ 00:07:00.399 "f5bdcd06-e455-4581-8624-5f90e85421b0" 00:07:00.399 ], 00:07:00.399 "product_name": "Malloc disk", 00:07:00.399 "block_size": 512, 00:07:00.399 "num_blocks": 16384, 00:07:00.399 "uuid": "f5bdcd06-e455-4581-8624-5f90e85421b0", 00:07:00.399 "assigned_rate_limits": { 00:07:00.399 "rw_ios_per_sec": 0, 00:07:00.399 "rw_mbytes_per_sec": 0, 00:07:00.399 "r_mbytes_per_sec": 0, 00:07:00.399 "w_mbytes_per_sec": 0 00:07:00.399 }, 
00:07:00.399 "claimed": false, 00:07:00.399 "zoned": false, 00:07:00.399 "supported_io_types": { 00:07:00.399 "read": true, 00:07:00.399 "write": true, 00:07:00.399 "unmap": true, 00:07:00.399 "flush": true, 00:07:00.399 "reset": true, 00:07:00.399 "nvme_admin": false, 00:07:00.399 "nvme_io": false, 00:07:00.399 "nvme_io_md": false, 00:07:00.399 "write_zeroes": true, 00:07:00.399 "zcopy": true, 00:07:00.399 "get_zone_info": false, 00:07:00.399 "zone_management": false, 00:07:00.399 "zone_append": false, 00:07:00.399 "compare": false, 00:07:00.399 "compare_and_write": false, 00:07:00.399 "abort": true, 00:07:00.399 "seek_hole": false, 00:07:00.399 "seek_data": false, 00:07:00.399 "copy": true, 00:07:00.399 "nvme_iov_md": false 00:07:00.399 }, 00:07:00.399 "memory_domains": [ 00:07:00.399 { 00:07:00.399 "dma_device_id": "system", 00:07:00.399 "dma_device_type": 1 00:07:00.399 }, 00:07:00.399 { 00:07:00.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.399 "dma_device_type": 2 00:07:00.399 } 00:07:00.399 ], 00:07:00.399 "driver_specific": {} 00:07:00.399 } 00:07:00.399 ]' 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.399 [2024-11-19 10:34:39.489681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:00.399 [2024-11-19 10:34:39.489733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.399 [2024-11-19 10:34:39.489750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f55db0 00:07:00.399 [2024-11-19 10:34:39.489758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.399 [2024-11-19 10:34:39.491357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.399 [2024-11-19 10:34:39.491393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:00.399 Passthru0 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.399 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.399 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:00.399 { 00:07:00.399 "name": "Malloc0", 00:07:00.399 "aliases": [ 00:07:00.399 "f5bdcd06-e455-4581-8624-5f90e85421b0" 00:07:00.399 ], 00:07:00.399 "product_name": "Malloc disk", 00:07:00.399 "block_size": 512, 00:07:00.399 "num_blocks": 16384, 00:07:00.399 "uuid": "f5bdcd06-e455-4581-8624-5f90e85421b0", 00:07:00.399 "assigned_rate_limits": { 00:07:00.399 "rw_ios_per_sec": 0, 00:07:00.399 "rw_mbytes_per_sec": 0, 00:07:00.399 "r_mbytes_per_sec": 0, 00:07:00.399 "w_mbytes_per_sec": 0 00:07:00.399 }, 00:07:00.399 "claimed": true, 00:07:00.399 "claim_type": "exclusive_write", 00:07:00.399 "zoned": false, 00:07:00.399 "supported_io_types": { 00:07:00.399 "read": true, 00:07:00.399 "write": true, 00:07:00.399 "unmap": true, 00:07:00.399 "flush": 
true, 00:07:00.399 "reset": true, 00:07:00.399 "nvme_admin": false, 00:07:00.399 "nvme_io": false, 00:07:00.399 "nvme_io_md": false, 00:07:00.399 "write_zeroes": true, 00:07:00.399 "zcopy": true, 00:07:00.399 "get_zone_info": false, 00:07:00.399 "zone_management": false, 00:07:00.399 "zone_append": false, 00:07:00.399 "compare": false, 00:07:00.399 "compare_and_write": false, 00:07:00.399 "abort": true, 00:07:00.399 "seek_hole": false, 00:07:00.399 "seek_data": false, 00:07:00.399 "copy": true, 00:07:00.400 "nvme_iov_md": false 00:07:00.400 }, 00:07:00.400 "memory_domains": [ 00:07:00.400 { 00:07:00.400 "dma_device_id": "system", 00:07:00.400 "dma_device_type": 1 00:07:00.400 }, 00:07:00.400 { 00:07:00.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.400 "dma_device_type": 2 00:07:00.400 } 00:07:00.400 ], 00:07:00.400 "driver_specific": {} 00:07:00.400 }, 00:07:00.400 { 00:07:00.400 "name": "Passthru0", 00:07:00.400 "aliases": [ 00:07:00.400 "66dcb3c2-2c84-581e-a112-6b3564d5a13f" 00:07:00.400 ], 00:07:00.400 "product_name": "passthru", 00:07:00.400 "block_size": 512, 00:07:00.400 "num_blocks": 16384, 00:07:00.400 "uuid": "66dcb3c2-2c84-581e-a112-6b3564d5a13f", 00:07:00.400 "assigned_rate_limits": { 00:07:00.400 "rw_ios_per_sec": 0, 00:07:00.400 "rw_mbytes_per_sec": 0, 00:07:00.400 "r_mbytes_per_sec": 0, 00:07:00.400 "w_mbytes_per_sec": 0 00:07:00.400 }, 00:07:00.400 "claimed": false, 00:07:00.400 "zoned": false, 00:07:00.400 "supported_io_types": { 00:07:00.400 "read": true, 00:07:00.400 "write": true, 00:07:00.400 "unmap": true, 00:07:00.400 "flush": true, 00:07:00.400 "reset": true, 00:07:00.400 "nvme_admin": false, 00:07:00.400 "nvme_io": false, 00:07:00.400 "nvme_io_md": false, 00:07:00.400 "write_zeroes": true, 00:07:00.400 "zcopy": true, 00:07:00.400 "get_zone_info": false, 00:07:00.400 "zone_management": false, 00:07:00.400 "zone_append": false, 00:07:00.400 "compare": false, 00:07:00.400 "compare_and_write": false, 00:07:00.400 "abort": true, 00:07:00.400 "seek_hole": false, 00:07:00.400 "seek_data": false, 00:07:00.400 "copy": true, 00:07:00.400 "nvme_iov_md": false 00:07:00.400 }, 00:07:00.400 "memory_domains": [ 00:07:00.400 { 00:07:00.400 "dma_device_id": "system", 00:07:00.400 "dma_device_type": 1 00:07:00.400 }, 00:07:00.400 { 00:07:00.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.400 "dma_device_type": 2 00:07:00.400 } 00:07:00.400 ], 00:07:00.400 "driver_specific": { 00:07:00.400 "passthru": { 00:07:00.400 "name": "Passthru0", 00:07:00.400 "base_bdev_name": "Malloc0" 00:07:00.400 } 00:07:00.400 } 00:07:00.400 } 00:07:00.400 ]' 00:07:00.400 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:00.400 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:00.400 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:00.400 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.400 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.400 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.400 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:00.400 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.400 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.661 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.661 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:07:00.661 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.661 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.661 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.661 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:00.661 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:00.661 10:34:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:00.661 00:07:00.661 real 0m0.314s 00:07:00.661 user 0m0.190s 00:07:00.661 sys 0m0.050s 00:07:00.661 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.661 10:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.661 ************************************ 00:07:00.661 END TEST rpc_integrity 00:07:00.661 ************************************ 00:07:00.661 10:34:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:00.661 10:34:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.661 10:34:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.661 10:34:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.661 ************************************ 00:07:00.661 START TEST rpc_plugins 00:07:00.661 ************************************ 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:00.661 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.661 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:00.661 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.661 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:00.661 { 00:07:00.661 "name": "Malloc1", 00:07:00.661 "aliases": [ 00:07:00.661 "6807fa09-37b8-46a9-99b3-c483cf4881de" 00:07:00.661 ], 00:07:00.661 "product_name": "Malloc disk", 00:07:00.661 "block_size": 4096, 00:07:00.661 "num_blocks": 256, 00:07:00.661 "uuid": "6807fa09-37b8-46a9-99b3-c483cf4881de", 00:07:00.661 "assigned_rate_limits": { 00:07:00.661 "rw_ios_per_sec": 0, 00:07:00.661 "rw_mbytes_per_sec": 0, 00:07:00.661 "r_mbytes_per_sec": 0, 00:07:00.661 "w_mbytes_per_sec": 0 00:07:00.661 }, 00:07:00.661 "claimed": false, 00:07:00.661 "zoned": false, 00:07:00.661 "supported_io_types": { 00:07:00.661 "read": true, 00:07:00.661 "write": true, 00:07:00.661 "unmap": true, 00:07:00.661 "flush": true, 00:07:00.661 "reset": true, 00:07:00.661 "nvme_admin": false, 00:07:00.661 "nvme_io": false, 00:07:00.661 "nvme_io_md": false, 00:07:00.661 "write_zeroes": true, 00:07:00.661 "zcopy": true, 00:07:00.661 "get_zone_info": false, 00:07:00.661 "zone_management": false, 00:07:00.661 "zone_append": false, 00:07:00.661 "compare": false, 00:07:00.661 "compare_and_write": false, 00:07:00.661 "abort": true, 00:07:00.661 "seek_hole": false, 00:07:00.661 "seek_data": false, 00:07:00.661 "copy": true, 00:07:00.661 "nvme_iov_md": false 
00:07:00.661 }, 00:07:00.661 "memory_domains": [ 00:07:00.661 { 00:07:00.661 "dma_device_id": "system", 00:07:00.661 "dma_device_type": 1 00:07:00.661 }, 00:07:00.661 { 00:07:00.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.661 "dma_device_type": 2 00:07:00.661 } 00:07:00.661 ], 00:07:00.661 "driver_specific": {} 00:07:00.661 } 00:07:00.661 ]' 00:07:00.661 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:00.661 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:00.661 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.661 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:00.661 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.661 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:00.661 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:00.924 10:34:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:00.924 00:07:00.924 real 0m0.154s 00:07:00.924 user 0m0.097s 00:07:00.924 sys 0m0.020s 00:07:00.924 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.924 10:34:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:00.924 ************************************ 00:07:00.924 END TEST rpc_plugins 00:07:00.924 ************************************ 00:07:00.924 10:34:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:00.924 10:34:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.924 10:34:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.924 10:34:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.924 ************************************ 00:07:00.924 START TEST rpc_trace_cmd_test 00:07:00.924 ************************************ 00:07:00.924 10:34:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:00.924 10:34:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:00.924 10:34:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:00.924 10:34:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.924 10:34:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.924 10:34:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.924 10:34:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:00.924 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid780732", 00:07:00.924 "tpoint_group_mask": "0x8", 00:07:00.924 "iscsi_conn": { 00:07:00.924 "mask": "0x2", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "scsi": { 00:07:00.924 "mask": "0x4", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "bdev": { 00:07:00.924 "mask": "0x8", 00:07:00.924 "tpoint_mask": "0xffffffffffffffff" 00:07:00.924 }, 00:07:00.924 "nvmf_rdma": { 00:07:00.924 "mask": "0x10", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "nvmf_tcp": { 00:07:00.924 "mask": "0x20", 00:07:00.924 
"tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "ftl": { 00:07:00.924 "mask": "0x40", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "blobfs": { 00:07:00.924 "mask": "0x80", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "dsa": { 00:07:00.924 "mask": "0x200", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "thread": { 00:07:00.924 "mask": "0x400", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "nvme_pcie": { 00:07:00.924 "mask": "0x800", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "iaa": { 00:07:00.924 "mask": "0x1000", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "nvme_tcp": { 00:07:00.924 "mask": "0x2000", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "bdev_nvme": { 00:07:00.924 "mask": "0x4000", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "sock": { 00:07:00.924 "mask": "0x8000", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "blob": { 00:07:00.924 "mask": "0x10000", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "bdev_raid": { 00:07:00.924 "mask": "0x20000", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 }, 00:07:00.924 "scheduler": { 00:07:00.924 "mask": "0x40000", 00:07:00.924 "tpoint_mask": "0x0" 00:07:00.924 } 00:07:00.924 }' 00:07:00.924 10:34:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:00.924 10:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:00.924 10:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:00.924 10:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:00.924 10:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:01.187 10:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:01.187 10:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:01.187 10:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:01.187 10:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:01.187 10:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:01.187 00:07:01.187 real 0m0.256s 00:07:01.187 user 0m0.207s 00:07:01.187 sys 0m0.038s 00:07:01.187 10:34:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.187 10:34:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.187 ************************************ 00:07:01.187 END TEST rpc_trace_cmd_test 00:07:01.187 ************************************ 00:07:01.187 10:34:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:01.187 10:34:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:01.187 10:34:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:01.187 10:34:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.187 10:34:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.187 10:34:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.187 ************************************ 00:07:01.187 START TEST rpc_daemon_integrity 00:07:01.187 ************************************ 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.187 10:34:40 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.187 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:01.450 { 00:07:01.450 "name": "Malloc2", 00:07:01.450 "aliases": [ 00:07:01.450 "4b638fcb-d7e1-4389-97e0-f215f4574036" 00:07:01.450 ], 00:07:01.450 "product_name": "Malloc disk", 00:07:01.450 "block_size": 512, 00:07:01.450 "num_blocks": 16384, 00:07:01.450 "uuid": "4b638fcb-d7e1-4389-97e0-f215f4574036", 00:07:01.450 "assigned_rate_limits": { 00:07:01.450 "rw_ios_per_sec": 0, 00:07:01.450 "rw_mbytes_per_sec": 0, 00:07:01.450 "r_mbytes_per_sec": 0, 00:07:01.450 "w_mbytes_per_sec": 0 00:07:01.450 }, 00:07:01.450 "claimed": false, 00:07:01.450 "zoned": false, 00:07:01.450 "supported_io_types": { 00:07:01.450 "read": true, 00:07:01.450 "write": true, 00:07:01.450 "unmap": true, 00:07:01.450 "flush": true, 00:07:01.450 "reset": true, 00:07:01.450 "nvme_admin": false, 00:07:01.450 "nvme_io": false, 00:07:01.450 "nvme_io_md": false, 00:07:01.450 "write_zeroes": true, 00:07:01.450 "zcopy": true, 00:07:01.450 "get_zone_info": false, 00:07:01.450 "zone_management": false, 00:07:01.450 "zone_append": false, 00:07:01.450 "compare": false, 00:07:01.450 "compare_and_write": false, 00:07:01.450 "abort": true, 00:07:01.450 "seek_hole": false, 00:07:01.450 "seek_data": false, 00:07:01.450 "copy": true, 00:07:01.450 "nvme_iov_md": false 00:07:01.450 }, 00:07:01.450 "memory_domains": [ 00:07:01.450 { 00:07:01.450 "dma_device_id": "system", 00:07:01.450 "dma_device_type": 1 00:07:01.450 }, 00:07:01.450 { 00:07:01.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.450 "dma_device_type": 2 00:07:01.450 } 00:07:01.450 ], 00:07:01.450 "driver_specific": {} 00:07:01.450 } 00:07:01.450 ]' 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 [2024-11-19 10:34:40.452310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:01.450 
[2024-11-19 10:34:40.452363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:01.450 [2024-11-19 10:34:40.452381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20868d0 00:07:01.450 [2024-11-19 10:34:40.452389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:01.450 [2024-11-19 10:34:40.453887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:01.450 [2024-11-19 10:34:40.453924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:01.450 Passthru0 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:01.450 { 00:07:01.450 "name": "Malloc2", 00:07:01.450 "aliases": [ 00:07:01.450 "4b638fcb-d7e1-4389-97e0-f215f4574036" 00:07:01.450 ], 00:07:01.450 "product_name": "Malloc disk", 00:07:01.450 "block_size": 512, 00:07:01.450 "num_blocks": 16384, 00:07:01.450 "uuid": "4b638fcb-d7e1-4389-97e0-f215f4574036", 00:07:01.450 "assigned_rate_limits": { 00:07:01.450 "rw_ios_per_sec": 0, 00:07:01.450 "rw_mbytes_per_sec": 0, 00:07:01.450 "r_mbytes_per_sec": 0, 00:07:01.450 "w_mbytes_per_sec": 0 00:07:01.450 }, 00:07:01.450 "claimed": true, 00:07:01.450 "claim_type": "exclusive_write", 00:07:01.450 "zoned": false, 00:07:01.450 "supported_io_types": { 00:07:01.450 "read": true, 00:07:01.450 "write": true, 00:07:01.450 "unmap": true, 00:07:01.450 "flush": true, 00:07:01.450 "reset": true, 00:07:01.450 "nvme_admin": false, 00:07:01.450 "nvme_io": false, 00:07:01.450 "nvme_io_md": false, 00:07:01.450 "write_zeroes": true, 00:07:01.450 "zcopy": true, 00:07:01.450 "get_zone_info": false, 00:07:01.450 "zone_management": false, 00:07:01.450 "zone_append": false, 00:07:01.450 "compare": false, 00:07:01.450 "compare_and_write": false, 00:07:01.450 "abort": true, 00:07:01.450 "seek_hole": false, 00:07:01.450 "seek_data": false, 00:07:01.450 "copy": true, 00:07:01.450 "nvme_iov_md": false 00:07:01.450 }, 00:07:01.450 "memory_domains": [ 00:07:01.450 { 00:07:01.450 "dma_device_id": "system", 00:07:01.450 "dma_device_type": 1 00:07:01.450 }, 00:07:01.450 { 00:07:01.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.450 "dma_device_type": 2 00:07:01.450 } 00:07:01.450 ], 00:07:01.450 "driver_specific": {} 00:07:01.450 }, 00:07:01.450 { 00:07:01.450 "name": "Passthru0", 00:07:01.450 "aliases": [ 00:07:01.450 "4ceaf4c2-f416-5970-ad75-0638970433cc" 00:07:01.450 ], 00:07:01.450 "product_name": "passthru", 00:07:01.450 "block_size": 512, 00:07:01.450 "num_blocks": 16384, 00:07:01.450 "uuid": "4ceaf4c2-f416-5970-ad75-0638970433cc", 00:07:01.450 "assigned_rate_limits": { 00:07:01.450 "rw_ios_per_sec": 0, 00:07:01.450 "rw_mbytes_per_sec": 0, 00:07:01.450 "r_mbytes_per_sec": 0, 00:07:01.450 "w_mbytes_per_sec": 0 00:07:01.450 }, 00:07:01.450 "claimed": false, 00:07:01.450 "zoned": false, 00:07:01.450 "supported_io_types": { 00:07:01.450 "read": true, 00:07:01.450 "write": true, 00:07:01.450 "unmap": true, 00:07:01.450 "flush": true, 00:07:01.450 "reset": true, 
00:07:01.450 "nvme_admin": false, 00:07:01.450 "nvme_io": false, 00:07:01.450 "nvme_io_md": false, 00:07:01.450 "write_zeroes": true, 00:07:01.450 "zcopy": true, 00:07:01.450 "get_zone_info": false, 00:07:01.450 "zone_management": false, 00:07:01.450 "zone_append": false, 00:07:01.450 "compare": false, 00:07:01.450 "compare_and_write": false, 00:07:01.450 "abort": true, 00:07:01.450 "seek_hole": false, 00:07:01.450 "seek_data": false, 00:07:01.450 "copy": true, 00:07:01.450 "nvme_iov_md": false 00:07:01.450 }, 00:07:01.450 "memory_domains": [ 00:07:01.450 { 00:07:01.450 "dma_device_id": "system", 00:07:01.450 "dma_device_type": 1 00:07:01.450 }, 00:07:01.450 { 00:07:01.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.450 "dma_device_type": 2 00:07:01.450 } 00:07:01.450 ], 00:07:01.450 "driver_specific": { 00:07:01.450 "passthru": { 00:07:01.450 "name": "Passthru0", 00:07:01.450 "base_bdev_name": "Malloc2" 00:07:01.450 } 00:07:01.450 } 00:07:01.450 } 00:07:01.450 ]' 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:01.450 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.451 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:01.451 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.451 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:01.451 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:01.451 10:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:01.451 00:07:01.451 real 0m0.305s 00:07:01.451 user 0m0.183s 00:07:01.451 sys 0m0.055s 00:07:01.451 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.451 10:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:01.451 ************************************ 00:07:01.451 END TEST rpc_daemon_integrity 00:07:01.451 ************************************ 00:07:01.711 10:34:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:01.711 10:34:40 rpc -- rpc/rpc.sh@84 -- # killprocess 780732 00:07:01.711 10:34:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 780732 ']' 00:07:01.711 10:34:40 rpc -- common/autotest_common.sh@958 -- # kill -0 780732 00:07:01.711 10:34:40 rpc -- common/autotest_common.sh@959 -- # uname 00:07:01.711 10:34:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.711 10:34:40 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 780732 
00:07:01.711 10:34:40 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.711 10:34:40 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.711 10:34:40 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 780732' 00:07:01.711 killing process with pid 780732 00:07:01.711 10:34:40 rpc -- common/autotest_common.sh@973 -- # kill 780732 00:07:01.711 10:34:40 rpc -- common/autotest_common.sh@978 -- # wait 780732 00:07:01.974 00:07:01.974 real 0m2.736s 00:07:01.974 user 0m3.463s 00:07:01.974 sys 0m0.873s 00:07:01.974 10:34:40 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.974 10:34:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.974 ************************************ 00:07:01.974 END TEST rpc 00:07:01.974 ************************************ 00:07:01.974 10:34:40 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:01.974 10:34:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.974 10:34:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.974 10:34:40 -- common/autotest_common.sh@10 -- # set +x 00:07:01.974 ************************************ 00:07:01.974 START TEST skip_rpc 00:07:01.974 ************************************ 00:07:01.974 10:34:41 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:01.974 * Looking for test storage... 00:07:01.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:01.974 10:34:41 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.974 10:34:41 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.974 10:34:41 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.238 10:34:41 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.238 10:34:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:02.238 10:34:41 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.238 10:34:41 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.238 --rc genhtml_branch_coverage=1 00:07:02.238 --rc genhtml_function_coverage=1 00:07:02.238 --rc genhtml_legend=1 00:07:02.238 --rc geninfo_all_blocks=1 00:07:02.238 --rc geninfo_unexecuted_blocks=1 00:07:02.238 00:07:02.238 ' 00:07:02.238 10:34:41 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.238 --rc genhtml_branch_coverage=1 00:07:02.238 --rc genhtml_function_coverage=1 00:07:02.238 --rc genhtml_legend=1 00:07:02.238 --rc geninfo_all_blocks=1 00:07:02.238 --rc geninfo_unexecuted_blocks=1 00:07:02.238 00:07:02.238 ' 00:07:02.238 10:34:41 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.238 --rc genhtml_branch_coverage=1 00:07:02.238 --rc genhtml_function_coverage=1 00:07:02.238 --rc genhtml_legend=1 00:07:02.238 --rc geninfo_all_blocks=1 00:07:02.238 --rc geninfo_unexecuted_blocks=1 00:07:02.238 00:07:02.238 ' 00:07:02.238 10:34:41 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.238 --rc genhtml_branch_coverage=1 00:07:02.238 --rc genhtml_function_coverage=1 00:07:02.238 --rc genhtml_legend=1 00:07:02.238 --rc geninfo_all_blocks=1 00:07:02.238 --rc geninfo_unexecuted_blocks=1 00:07:02.238 00:07:02.238 ' 00:07:02.238 10:34:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:02.238 10:34:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:02.238 10:34:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:02.238 10:34:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.238 10:34:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.238 10:34:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.238 ************************************ 00:07:02.238 START TEST skip_rpc 00:07:02.238 ************************************ 00:07:02.238 10:34:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:02.238 
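The test_skip_rpc run that follows launches spdk_tgt with --no-rpc-server, so no listening socket is created and the rpc_cmd spdk_get_version probe below is expected to fail; that failure path is the behavior under test. A small sketch of what the client side sees in this configuration, using SPDK's JSON-RPC client API (the socket path is copied from the log; the program itself is illustrative):

    #include "spdk/jsonrpc.h"
    #include <sys/socket.h>
    #include <stdio.h>

    int
    main(void)
    {
            /* With --no-rpc-server nothing listens here, so connect fails. */
            struct spdk_jsonrpc_client *client =
                    spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);

            if (client == NULL) {
                    printf("no RPC server at /var/tmp/spdk.sock, as expected\n");
                    return 1;
            }
            spdk_jsonrpc_client_close(client);
            return 0;
    }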
10:34:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=781577 00:07:02.238 10:34:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.238 10:34:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:02.238 10:34:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:02.238 [2024-11-19 10:34:41.339302] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:07:02.238 [2024-11-19 10:34:41.339358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781577 ] 00:07:02.238 [2024-11-19 10:34:41.432221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.498 [2024-11-19 10:34:41.484551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 781577 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 781577 ']' 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 781577 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 781577 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 781577' 00:07:07.785 killing process with pid 781577 00:07:07.785 10:34:46 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 781577 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 781577 00:07:07.785 00:07:07.785 real 0m5.263s 00:07:07.785 user 0m5.013s 00:07:07.785 sys 0m0.290s 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.785 10:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.785 ************************************ 00:07:07.785 END TEST skip_rpc 00:07:07.785 ************************************ 00:07:07.785 10:34:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:07.785 10:34:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.785 10:34:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.785 10:34:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.785 ************************************ 00:07:07.785 START TEST skip_rpc_with_json 00:07:07.785 ************************************ 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=782620 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 782620 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 782620 ']' 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.785 10:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:07.785 [2024-11-19 10:34:46.681434] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
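For reference, the skip_rpc pass traced above reduces to the minimal sketch below: launch spdk_tgt with --no-rpc-server and confirm that an RPC call fails. This is a simplified reconstruction of what rpc/skip_rpc.sh drives through its rpc_cmd/NOT helpers, not the harness itself; paths are the ones printed in this log.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the target without an RPC server, as rpc/skip_rpc.sh@15 does.
$SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                    # same settle time as rpc/skip_rpc.sh@19
# spdk_get_version must fail: nothing is listening on /var/tmp/spdk.sock.
if $SPDK_DIR/scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC answered although --no-rpc-server was given" >&2
fi
kill -9 $spdk_pid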
00:07:07.785 [2024-11-19 10:34:46.681492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782620 ] 00:07:07.785 [2024-11-19 10:34:46.767850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.785 [2024-11-19 10:34:46.801264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:08.354 [2024-11-19 10:34:47.469445] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:08.354 request: 00:07:08.354 { 00:07:08.354 "trtype": "tcp", 00:07:08.354 "method": "nvmf_get_transports", 00:07:08.354 "req_id": 1 00:07:08.354 } 00:07:08.354 Got JSON-RPC error response 00:07:08.354 response: 00:07:08.354 { 00:07:08.354 "code": -19, 00:07:08.354 "message": "No such device" 00:07:08.354 } 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:08.354 [2024-11-19 10:34:47.481539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.354 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:08.614 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.614 10:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:08.614 { 00:07:08.614 "subsystems": [ 00:07:08.614 { 00:07:08.614 "subsystem": "fsdev", 00:07:08.614 "config": [ 00:07:08.614 { 00:07:08.614 "method": "fsdev_set_opts", 00:07:08.614 "params": { 00:07:08.614 "fsdev_io_pool_size": 65535, 00:07:08.614 "fsdev_io_cache_size": 256 00:07:08.614 } 00:07:08.614 } 00:07:08.614 ] 00:07:08.614 }, 00:07:08.614 { 00:07:08.614 "subsystem": "vfio_user_target", 00:07:08.614 "config": null 00:07:08.614 }, 00:07:08.614 { 00:07:08.614 "subsystem": "keyring", 00:07:08.614 "config": [] 00:07:08.614 }, 00:07:08.614 { 00:07:08.614 "subsystem": "iobuf", 00:07:08.614 "config": [ 00:07:08.614 { 00:07:08.614 "method": "iobuf_set_options", 00:07:08.614 "params": { 00:07:08.614 "small_pool_count": 8192, 00:07:08.614 "large_pool_count": 1024, 00:07:08.614 "small_bufsize": 8192, 00:07:08.614 "large_bufsize": 135168, 00:07:08.614 "enable_numa": false 00:07:08.614 } 00:07:08.614 } 00:07:08.614 
] 00:07:08.614 }, 00:07:08.614 { 00:07:08.614 "subsystem": "sock", 00:07:08.614 "config": [ 00:07:08.614 { 00:07:08.614 "method": "sock_set_default_impl", 00:07:08.614 "params": { 00:07:08.614 "impl_name": "posix" 00:07:08.614 } 00:07:08.614 }, 00:07:08.614 { 00:07:08.614 "method": "sock_impl_set_options", 00:07:08.614 "params": { 00:07:08.614 "impl_name": "ssl", 00:07:08.614 "recv_buf_size": 4096, 00:07:08.614 "send_buf_size": 4096, 00:07:08.615 "enable_recv_pipe": true, 00:07:08.615 "enable_quickack": false, 00:07:08.615 "enable_placement_id": 0, 00:07:08.615 "enable_zerocopy_send_server": true, 00:07:08.615 "enable_zerocopy_send_client": false, 00:07:08.615 "zerocopy_threshold": 0, 00:07:08.615 "tls_version": 0, 00:07:08.615 "enable_ktls": false 00:07:08.615 } 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "method": "sock_impl_set_options", 00:07:08.615 "params": { 00:07:08.615 "impl_name": "posix", 00:07:08.615 "recv_buf_size": 2097152, 00:07:08.615 "send_buf_size": 2097152, 00:07:08.615 "enable_recv_pipe": true, 00:07:08.615 "enable_quickack": false, 00:07:08.615 "enable_placement_id": 0, 00:07:08.615 "enable_zerocopy_send_server": true, 00:07:08.615 "enable_zerocopy_send_client": false, 00:07:08.615 "zerocopy_threshold": 0, 00:07:08.615 "tls_version": 0, 00:07:08.615 "enable_ktls": false 00:07:08.615 } 00:07:08.615 } 00:07:08.615 ] 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "vmd", 00:07:08.615 "config": [] 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "accel", 00:07:08.615 "config": [ 00:07:08.615 { 00:07:08.615 "method": "accel_set_options", 00:07:08.615 "params": { 00:07:08.615 "small_cache_size": 128, 00:07:08.615 "large_cache_size": 16, 00:07:08.615 "task_count": 2048, 00:07:08.615 "sequence_count": 2048, 00:07:08.615 "buf_count": 2048 00:07:08.615 } 00:07:08.615 } 00:07:08.615 ] 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "bdev", 00:07:08.615 "config": [ 00:07:08.615 { 00:07:08.615 "method": "bdev_set_options", 00:07:08.615 "params": { 00:07:08.615 "bdev_io_pool_size": 65535, 00:07:08.615 "bdev_io_cache_size": 256, 00:07:08.615 "bdev_auto_examine": true, 00:07:08.615 "iobuf_small_cache_size": 128, 00:07:08.615 "iobuf_large_cache_size": 16 00:07:08.615 } 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "method": "bdev_raid_set_options", 00:07:08.615 "params": { 00:07:08.615 "process_window_size_kb": 1024, 00:07:08.615 "process_max_bandwidth_mb_sec": 0 00:07:08.615 } 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "method": "bdev_iscsi_set_options", 00:07:08.615 "params": { 00:07:08.615 "timeout_sec": 30 00:07:08.615 } 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "method": "bdev_nvme_set_options", 00:07:08.615 "params": { 00:07:08.615 "action_on_timeout": "none", 00:07:08.615 "timeout_us": 0, 00:07:08.615 "timeout_admin_us": 0, 00:07:08.615 "keep_alive_timeout_ms": 10000, 00:07:08.615 "arbitration_burst": 0, 00:07:08.615 "low_priority_weight": 0, 00:07:08.615 "medium_priority_weight": 0, 00:07:08.615 "high_priority_weight": 0, 00:07:08.615 "nvme_adminq_poll_period_us": 10000, 00:07:08.615 "nvme_ioq_poll_period_us": 0, 00:07:08.615 "io_queue_requests": 0, 00:07:08.615 "delay_cmd_submit": true, 00:07:08.615 "transport_retry_count": 4, 00:07:08.615 "bdev_retry_count": 3, 00:07:08.615 "transport_ack_timeout": 0, 00:07:08.615 "ctrlr_loss_timeout_sec": 0, 00:07:08.615 "reconnect_delay_sec": 0, 00:07:08.615 "fast_io_fail_timeout_sec": 0, 00:07:08.615 "disable_auto_failback": false, 00:07:08.615 "generate_uuids": false, 00:07:08.615 "transport_tos": 0, 
00:07:08.615 "nvme_error_stat": false, 00:07:08.615 "rdma_srq_size": 0, 00:07:08.615 "io_path_stat": false, 00:07:08.615 "allow_accel_sequence": false, 00:07:08.615 "rdma_max_cq_size": 0, 00:07:08.615 "rdma_cm_event_timeout_ms": 0, 00:07:08.615 "dhchap_digests": [ 00:07:08.615 "sha256", 00:07:08.615 "sha384", 00:07:08.615 "sha512" 00:07:08.615 ], 00:07:08.615 "dhchap_dhgroups": [ 00:07:08.615 "null", 00:07:08.615 "ffdhe2048", 00:07:08.615 "ffdhe3072", 00:07:08.615 "ffdhe4096", 00:07:08.615 "ffdhe6144", 00:07:08.615 "ffdhe8192" 00:07:08.615 ] 00:07:08.615 } 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "method": "bdev_nvme_set_hotplug", 00:07:08.615 "params": { 00:07:08.615 "period_us": 100000, 00:07:08.615 "enable": false 00:07:08.615 } 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "method": "bdev_wait_for_examine" 00:07:08.615 } 00:07:08.615 ] 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "scsi", 00:07:08.615 "config": null 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "scheduler", 00:07:08.615 "config": [ 00:07:08.615 { 00:07:08.615 "method": "framework_set_scheduler", 00:07:08.615 "params": { 00:07:08.615 "name": "static" 00:07:08.615 } 00:07:08.615 } 00:07:08.615 ] 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "vhost_scsi", 00:07:08.615 "config": [] 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "vhost_blk", 00:07:08.615 "config": [] 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "ublk", 00:07:08.615 "config": [] 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "nbd", 00:07:08.615 "config": [] 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "nvmf", 00:07:08.615 "config": [ 00:07:08.615 { 00:07:08.615 "method": "nvmf_set_config", 00:07:08.615 "params": { 00:07:08.615 "discovery_filter": "match_any", 00:07:08.615 "admin_cmd_passthru": { 00:07:08.615 "identify_ctrlr": false 00:07:08.615 }, 00:07:08.615 "dhchap_digests": [ 00:07:08.615 "sha256", 00:07:08.615 "sha384", 00:07:08.615 "sha512" 00:07:08.615 ], 00:07:08.615 "dhchap_dhgroups": [ 00:07:08.615 "null", 00:07:08.615 "ffdhe2048", 00:07:08.615 "ffdhe3072", 00:07:08.615 "ffdhe4096", 00:07:08.615 "ffdhe6144", 00:07:08.615 "ffdhe8192" 00:07:08.615 ] 00:07:08.615 } 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "method": "nvmf_set_max_subsystems", 00:07:08.615 "params": { 00:07:08.615 "max_subsystems": 1024 00:07:08.615 } 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "method": "nvmf_set_crdt", 00:07:08.615 "params": { 00:07:08.615 "crdt1": 0, 00:07:08.615 "crdt2": 0, 00:07:08.615 "crdt3": 0 00:07:08.615 } 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "method": "nvmf_create_transport", 00:07:08.615 "params": { 00:07:08.615 "trtype": "TCP", 00:07:08.615 "max_queue_depth": 128, 00:07:08.615 "max_io_qpairs_per_ctrlr": 127, 00:07:08.615 "in_capsule_data_size": 4096, 00:07:08.615 "max_io_size": 131072, 00:07:08.615 "io_unit_size": 131072, 00:07:08.615 "max_aq_depth": 128, 00:07:08.615 "num_shared_buffers": 511, 00:07:08.615 "buf_cache_size": 4294967295, 00:07:08.615 "dif_insert_or_strip": false, 00:07:08.615 "zcopy": false, 00:07:08.615 "c2h_success": true, 00:07:08.615 "sock_priority": 0, 00:07:08.615 "abort_timeout_sec": 1, 00:07:08.615 "ack_timeout": 0, 00:07:08.615 "data_wr_pool_size": 0 00:07:08.615 } 00:07:08.615 } 00:07:08.615 ] 00:07:08.615 }, 00:07:08.615 { 00:07:08.615 "subsystem": "iscsi", 00:07:08.615 "config": [ 00:07:08.615 { 00:07:08.615 "method": "iscsi_set_options", 00:07:08.615 "params": { 00:07:08.615 "node_base": "iqn.2016-06.io.spdk", 00:07:08.615 "max_sessions": 
128, 00:07:08.615 "max_connections_per_session": 2, 00:07:08.615 "max_queue_depth": 64, 00:07:08.615 "default_time2wait": 2, 00:07:08.615 "default_time2retain": 20, 00:07:08.615 "first_burst_length": 8192, 00:07:08.615 "immediate_data": true, 00:07:08.615 "allow_duplicated_isid": false, 00:07:08.615 "error_recovery_level": 0, 00:07:08.615 "nop_timeout": 60, 00:07:08.615 "nop_in_interval": 30, 00:07:08.615 "disable_chap": false, 00:07:08.615 "require_chap": false, 00:07:08.615 "mutual_chap": false, 00:07:08.615 "chap_group": 0, 00:07:08.615 "max_large_datain_per_connection": 64, 00:07:08.615 "max_r2t_per_connection": 4, 00:07:08.615 "pdu_pool_size": 36864, 00:07:08.615 "immediate_data_pool_size": 16384, 00:07:08.615 "data_out_pool_size": 2048 00:07:08.615 } 00:07:08.615 } 00:07:08.615 ] 00:07:08.615 } 00:07:08.615 ] 00:07:08.615 } 00:07:08.615 10:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:08.615 10:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 782620 00:07:08.615 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 782620 ']' 00:07:08.615 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 782620 00:07:08.615 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:08.616 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.616 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782620 00:07:08.616 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.616 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.616 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 782620' 00:07:08.616 killing process with pid 782620 00:07:08.616 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 782620 00:07:08.616 10:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 782620 00:07:08.876 10:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=782961 00:07:08.876 10:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:08.876 10:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 782961 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 782961 ']' 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 782961 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782961 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 782961' 00:07:14.160 killing process with pid 782961 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 782961 00:07:14.160 10:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 782961 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:14.160 00:07:14.160 real 0m6.551s 00:07:14.160 user 0m6.481s 00:07:14.160 sys 0m0.539s 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:14.160 ************************************ 00:07:14.160 END TEST skip_rpc_with_json 00:07:14.160 ************************************ 00:07:14.160 10:34:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:14.160 10:34:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.160 10:34:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.160 10:34:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.160 ************************************ 00:07:14.160 START TEST skip_rpc_with_delay 00:07:14.160 ************************************ 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:14.160 [2024-11-19 
10:34:53.315395] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.160 00:07:14.160 real 0m0.078s 00:07:14.160 user 0m0.050s 00:07:14.160 sys 0m0.028s 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.160 10:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:14.160 ************************************ 00:07:14.160 END TEST skip_rpc_with_delay 00:07:14.160 ************************************ 00:07:14.421 10:34:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:14.421 10:34:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:14.421 10:34:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:14.421 10:34:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.421 10:34:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.421 10:34:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.421 ************************************ 00:07:14.421 START TEST exit_on_failed_rpc_init 00:07:14.421 ************************************ 00:07:14.421 10:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:14.421 10:34:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=784022 00:07:14.421 10:34:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 784022 00:07:14.421 10:34:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.421 10:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 784022 ']' 00:07:14.421 10:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.421 10:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.421 10:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.421 10:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.421 10:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:14.421 [2024-11-19 10:34:53.478269] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
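The skip_rpc_with_delay assertion recorded just above is even simpler: spdk_tgt must refuse the flag combination outright. A minimal sketch, using the same workspace path shown in this log:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# app.c rejects --wait-for-rpc when --no-rpc-server disables the RPC server,
# so this command must exit non-zero without ever starting a reactor.
if $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt accepted --no-rpc-server with --wait-for-rpc" >&2
fi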
00:07:14.421 [2024-11-19 10:34:53.478330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784022 ] 00:07:14.421 [2024-11-19 10:34:53.562674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.421 [2024-11-19 10:34:53.597949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:15.362 [2024-11-19 10:34:54.326291] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:07:15.362 [2024-11-19 10:34:54.326343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784225 ] 00:07:15.362 [2024-11-19 10:34:54.411930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.362 [2024-11-19 10:34:54.447730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.362 [2024-11-19 10:34:54.447778] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
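The rpc.c errors above are the expected outcome of the exit_on_failed_rpc_init scenario: two targets contend for the default /var/tmp/spdk.sock. A rough sketch under the same build tree; the harness waits with its waitforlisten helper, for which the sleep below is a crude stand-in:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK_DIR/build/bin/spdk_tgt -m 0x1 &          # first target owns the socket
first_pid=$!
sleep 5                                        # stand-in for waitforlisten
# The second target binds the same default RPC socket, fails rpc_listen,
# and must exit non-zero; that failure is exactly what the test asserts.
if $SPDK_DIR/build/bin/spdk_tgt -m 0x2; then
    echo "unexpected: second target started despite the socket conflict" >&2
fi
kill -SIGINT $first_pid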
00:07:15.362 [2024-11-19 10:34:54.447788] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:15.362 [2024-11-19 10:34:54.447795] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 784022 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 784022 ']' 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 784022 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784022 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784022' 00:07:15.362 killing process with pid 784022 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 784022 00:07:15.362 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 784022 00:07:15.622 00:07:15.622 real 0m1.319s 00:07:15.622 user 0m1.547s 00:07:15.622 sys 0m0.378s 00:07:15.622 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.622 10:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:15.622 ************************************ 00:07:15.622 END TEST exit_on_failed_rpc_init 00:07:15.622 ************************************ 00:07:15.622 10:34:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:15.622 00:07:15.622 real 0m13.742s 00:07:15.622 user 0m13.321s 00:07:15.622 sys 0m1.563s 00:07:15.622 10:34:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.622 10:34:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.622 ************************************ 00:07:15.622 END TEST skip_rpc 00:07:15.622 ************************************ 00:07:15.622 10:34:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:15.622 10:34:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.622 10:34:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.622 10:34:54 -- 
common/autotest_common.sh@10 -- # set +x 00:07:15.883 ************************************ 00:07:15.884 START TEST rpc_client 00:07:15.884 ************************************ 00:07:15.884 10:34:54 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:15.884 * Looking for test storage... 00:07:15.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:15.884 10:34:54 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:15.884 10:34:54 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:15.884 10:34:54 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:15.884 10:34:55 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.884 10:34:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:15.884 10:34:55 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.884 10:34:55 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.884 --rc genhtml_branch_coverage=1 00:07:15.884 --rc genhtml_function_coverage=1 00:07:15.884 --rc genhtml_legend=1 00:07:15.884 --rc geninfo_all_blocks=1 00:07:15.884 --rc geninfo_unexecuted_blocks=1 00:07:15.884 00:07:15.884 ' 00:07:15.884 10:34:55 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.884 --rc genhtml_branch_coverage=1 00:07:15.884 --rc genhtml_function_coverage=1 00:07:15.884 --rc genhtml_legend=1 00:07:15.884 --rc geninfo_all_blocks=1 00:07:15.884 --rc geninfo_unexecuted_blocks=1 00:07:15.884 00:07:15.884 ' 00:07:15.884 10:34:55 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.884 --rc genhtml_branch_coverage=1 00:07:15.884 --rc genhtml_function_coverage=1 00:07:15.884 --rc genhtml_legend=1 00:07:15.884 --rc geninfo_all_blocks=1 00:07:15.884 --rc geninfo_unexecuted_blocks=1 00:07:15.884 00:07:15.884 ' 00:07:15.884 10:34:55 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.884 --rc genhtml_branch_coverage=1 00:07:15.884 --rc genhtml_function_coverage=1 00:07:15.884 --rc genhtml_legend=1 00:07:15.884 --rc geninfo_all_blocks=1 00:07:15.884 --rc geninfo_unexecuted_blocks=1 00:07:15.884 00:07:15.884 ' 00:07:15.884 10:34:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:15.884 OK 00:07:16.146 10:34:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:16.146 00:07:16.146 real 0m0.225s 00:07:16.146 user 0m0.133s 00:07:16.146 sys 0m0.105s 00:07:16.146 10:34:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.146 10:34:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:16.146 ************************************ 00:07:16.146 END TEST rpc_client 00:07:16.146 ************************************ 00:07:16.146 10:34:55 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
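The scripts/common.sh trace that keeps reappearing above (lt 1.15 2, decimal 1, ver1[v]/ver2[v]) is a field-by-field dotted-version compare used to pick lcov option spellings. A simplified reconstruction follows; the real helper additionally normalizes non-numeric fields through its decimal function, which this sketch replaces with a plain default of 0:

# lt A B: succeed when version A sorts strictly before version B.
lt() {
    local IFS=.-          # split fields on dots and dashes, as common.sh does
    local -a ver1 ver2
    local v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1              # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov < 2: use the --rc lcov_* option spelling'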
00:07:16.146 10:34:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.146 10:34:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.146 10:34:55 -- common/autotest_common.sh@10 -- # set +x 00:07:16.146 ************************************ 00:07:16.146 START TEST json_config 00:07:16.146 ************************************ 00:07:16.146 10:34:55 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:16.146 10:34:55 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.146 10:34:55 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.146 10:34:55 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.146 10:34:55 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.146 10:34:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.146 10:34:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.146 10:34:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.146 10:34:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.146 10:34:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.146 10:34:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.146 10:34:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.146 10:34:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.146 10:34:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.146 10:34:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.146 10:34:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.146 10:34:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:16.146 10:34:55 json_config -- scripts/common.sh@345 -- # : 1 00:07:16.146 10:34:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.146 10:34:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.146 10:34:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:16.146 10:34:55 json_config -- scripts/common.sh@353 -- # local d=1 00:07:16.146 10:34:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.146 10:34:55 json_config -- scripts/common.sh@355 -- # echo 1 00:07:16.146 10:34:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.146 10:34:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:16.146 10:34:55 json_config -- scripts/common.sh@353 -- # local d=2 00:07:16.146 10:34:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.146 10:34:55 json_config -- scripts/common.sh@355 -- # echo 2 00:07:16.146 10:34:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.146 10:34:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.146 10:34:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.146 10:34:55 json_config -- scripts/common.sh@368 -- # return 0 00:07:16.146 10:34:55 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.146 10:34:55 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.146 --rc genhtml_branch_coverage=1 00:07:16.146 --rc genhtml_function_coverage=1 00:07:16.146 --rc genhtml_legend=1 00:07:16.146 --rc geninfo_all_blocks=1 00:07:16.146 --rc geninfo_unexecuted_blocks=1 00:07:16.146 00:07:16.146 ' 00:07:16.146 10:34:55 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.146 --rc genhtml_branch_coverage=1 00:07:16.146 --rc genhtml_function_coverage=1 00:07:16.146 --rc genhtml_legend=1 00:07:16.146 --rc geninfo_all_blocks=1 00:07:16.146 --rc geninfo_unexecuted_blocks=1 00:07:16.146 00:07:16.146 ' 00:07:16.146 10:34:55 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.146 --rc genhtml_branch_coverage=1 00:07:16.146 --rc genhtml_function_coverage=1 00:07:16.146 --rc genhtml_legend=1 00:07:16.146 --rc geninfo_all_blocks=1 00:07:16.146 --rc geninfo_unexecuted_blocks=1 00:07:16.146 00:07:16.146 ' 00:07:16.146 10:34:55 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.146 --rc genhtml_branch_coverage=1 00:07:16.146 --rc genhtml_function_coverage=1 00:07:16.146 --rc genhtml_legend=1 00:07:16.146 --rc geninfo_all_blocks=1 00:07:16.146 --rc geninfo_unexecuted_blocks=1 00:07:16.146 00:07:16.146 ' 00:07:16.146 10:34:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.146 10:34:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:16.146 10:34:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.146 10:34:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.146 10:34:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.146 10:34:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.146 10:34:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.146 10:34:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.146 10:34:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:07:16.146 10:34:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.146 10:34:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.146 10:34:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.408 10:34:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.408 10:34:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.408 10:34:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.408 10:34:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.408 10:34:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.408 10:34:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.408 10:34:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.408 10:34:55 json_config -- paths/export.sh@5 -- # export PATH 00:07:16.408 10:34:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@51 -- # : 0 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:07:16.408 10:34:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:16.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.408 10:34:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:16.408 INFO: JSON configuration test init 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:16.408 10:34:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.408 10:34:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:16.408 10:34:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.408 10:34:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.408 10:34:55 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:16.408 10:34:55 json_config -- 
json_config/common.sh@9 -- # local app=target 00:07:16.408 10:34:55 json_config -- json_config/common.sh@10 -- # shift 00:07:16.408 10:34:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:16.408 10:34:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:16.408 10:34:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:16.408 10:34:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:16.408 10:34:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:16.408 10:34:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=784497 00:07:16.408 10:34:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:16.408 Waiting for target to run... 00:07:16.408 10:34:55 json_config -- json_config/common.sh@25 -- # waitforlisten 784497 /var/tmp/spdk_tgt.sock 00:07:16.408 10:34:55 json_config -- common/autotest_common.sh@835 -- # '[' -z 784497 ']' 00:07:16.408 10:34:55 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:16.408 10:34:55 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.408 10:34:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:16.408 10:34:55 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:16.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:16.409 10:34:55 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.409 10:34:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.409 [2024-11-19 10:34:55.433187] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
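One cosmetic wart worth noting: the "[: : integer expression expected" complaint from nvmf/common.sh line 33, a few records above, is bash's test builtin choking on an empty operand, since '[' '' -eq 1 ']' asks for an integer comparison against an empty string. The variable involved is not named in this log, so the sketch below reproduces the message with a hypothetical empty flag and shows two tolerant spellings:

flag=''                        # stand-in for whichever variable line 33 tests
[ "$flag" -eq 1 ] || true      # prints "[: : integer expression expected"
[ "${flag:-0}" -eq 1 ] || true # defaulting the empty value to 0 stays quiet
[[ $flag == 1 ]] || true       # string comparison needs no integer at all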
00:07:16.409 [2024-11-19 10:34:55.433237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784497 ] 00:07:16.669 [2024-11-19 10:34:55.784109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.669 [2024-11-19 10:34:55.817863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.240 10:34:56 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.240 10:34:56 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:17.240 10:34:56 json_config -- json_config/common.sh@26 -- # echo '' 00:07:17.240 00:07:17.240 10:34:56 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:17.240 10:34:56 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:17.240 10:34:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.240 10:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:17.240 10:34:56 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:17.240 10:34:56 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:17.240 10:34:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.240 10:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:17.240 10:34:56 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:17.240 10:34:56 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:17.240 10:34:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:17.809 10:34:56 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:17.809 10:34:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:17.809 10:34:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.809 10:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:17.809 10:34:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:17.809 10:34:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:17.809 10:34:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:17.810 10:34:56 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:17.810 10:34:56 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:17.810 10:34:56 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:17.810 10:34:56 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:17.810 10:34:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:18.070 10:34:57 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@54 -- # sort 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:18.070 10:34:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.070 10:34:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:18.070 10:34:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.070 10:34:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:18.070 10:34:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:18.070 MallocForNvmf0 00:07:18.070 10:34:57 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:18.070 10:34:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:18.330 MallocForNvmf1 00:07:18.330 10:34:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:18.330 10:34:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:18.590 [2024-11-19 10:34:57.593289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.590 10:34:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:18.590 10:34:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:18.850 10:34:57 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:18.850 10:34:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:18.850 10:34:57 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:18.850 10:34:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:19.111 10:34:58 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:19.111 10:34:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:19.371 [2024-11-19 10:34:58.319475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:19.372 10:34:58 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:19.372 10:34:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:19.372 10:34:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.372 10:34:58 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:19.372 10:34:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:19.372 10:34:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.372 10:34:58 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:19.372 10:34:58 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:19.372 10:34:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:19.632 MallocBdevForConfigChangeCheck 00:07:19.632 10:34:58 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:19.632 10:34:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:19.632 10:34:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.632 10:34:58 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:19.632 10:34:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:19.893 10:34:58 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:19.893 INFO: shutting down applications... 
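
Note: the json_config flow above first verifies the notification-type set (echoing the enabled and reported types through tr ' ' '\n' | sort | uniq -u; an empty result means the two sets match exactly), then builds the NVMe-oF/TCP target entirely over the RPC socket. A minimal sketch of that RPC sequence, with every call taken from the trace (the $rpc shorthand is hypothetical; the log spells out the full rpc.py path and -s /var/tmp/spdk_tgt.sock each time):

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB bdev, 1 KiB blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The extra MallocBdevForConfigChangeCheck bdev created right after is not part of the subsystem; it only exists so a later step has something to delete when provoking a configuration change.
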
00:07:19.893 10:34:58 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:19.893 10:34:58 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:19.893 10:34:58 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:19.893 10:34:58 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:20.466 Calling clear_iscsi_subsystem 00:07:20.466 Calling clear_nvmf_subsystem 00:07:20.466 Calling clear_nbd_subsystem 00:07:20.466 Calling clear_ublk_subsystem 00:07:20.466 Calling clear_vhost_blk_subsystem 00:07:20.466 Calling clear_vhost_scsi_subsystem 00:07:20.466 Calling clear_bdev_subsystem 00:07:20.466 10:34:59 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:20.466 10:34:59 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:20.466 10:34:59 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:20.466 10:34:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:20.466 10:34:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:20.466 10:34:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:20.727 10:34:59 json_config -- json_config/json_config.sh@352 -- # break 00:07:20.727 10:34:59 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:20.727 10:34:59 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:20.727 10:34:59 json_config -- json_config/common.sh@31 -- # local app=target 00:07:20.727 10:34:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:20.727 10:34:59 json_config -- json_config/common.sh@35 -- # [[ -n 784497 ]] 00:07:20.727 10:34:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 784497 00:07:20.727 10:34:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:20.727 10:34:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:20.727 10:34:59 json_config -- json_config/common.sh@41 -- # kill -0 784497 00:07:20.727 10:34:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:21.299 10:35:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:21.299 10:35:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:21.299 10:35:00 json_config -- json_config/common.sh@41 -- # kill -0 784497 00:07:21.299 10:35:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:21.299 10:35:00 json_config -- json_config/common.sh@43 -- # break 00:07:21.299 10:35:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:21.299 10:35:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:21.299 SPDK target shutdown done 00:07:21.299 10:35:00 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:21.299 INFO: relaunching applications... 
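
Note: before the target is stopped, clear_config.py tears every subsystem down (the "Calling clear_*_subsystem" lines above) and config_filter.py -method check_empty confirms nothing is left. json_config_test_shutdown_app, traced above for pid 784497, then stops the target cooperatively: one SIGINT, followed by up to 30 half-second polls with kill -0, which only tests whether the pid still exists. A condensed sketch of the cooperative part of that loop, as it appears in json_config/common.sh:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break    # process gone: clean shutdown
        sleep 0.5
    done
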
00:07:21.299 10:35:00 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:21.299 10:35:00 json_config -- json_config/common.sh@9 -- # local app=target 00:07:21.299 10:35:00 json_config -- json_config/common.sh@10 -- # shift 00:07:21.299 10:35:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:21.299 10:35:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:21.299 10:35:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:21.299 10:35:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.299 10:35:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.299 10:35:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=785634 00:07:21.299 10:35:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:21.299 Waiting for target to run... 00:07:21.299 10:35:00 json_config -- json_config/common.sh@25 -- # waitforlisten 785634 /var/tmp/spdk_tgt.sock 00:07:21.299 10:35:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:21.299 10:35:00 json_config -- common/autotest_common.sh@835 -- # '[' -z 785634 ']' 00:07:21.299 10:35:00 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:21.299 10:35:00 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.299 10:35:00 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:21.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:21.299 10:35:00 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.299 10:35:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.299 [2024-11-19 10:35:00.311116] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:07:21.299 [2024-11-19 10:35:00.311182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785634 ] 00:07:21.560 [2024-11-19 10:35:00.641769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.560 [2024-11-19 10:35:00.667236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.133 [2024-11-19 10:35:01.171579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.133 [2024-11-19 10:35:01.203924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:22.133 10:35:01 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.133 10:35:01 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:22.133 10:35:01 json_config -- json_config/common.sh@26 -- # echo '' 00:07:22.133 00:07:22.133 10:35:01 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:22.133 10:35:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:22.133 INFO: Checking if target configuration is the same... 
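
Note: the relaunch feeds the configuration captured earlier by save_config straight back through spdk_tgt --json, so the target rebuilds the transport, subsystem, namespaces and listener without repeating any of the first run's RPC calls; the notices above show the TCP transport and the 127.0.0.1:4420 listener coming back up before the tests resume. The idempotency check traced below then normalizes two save_config dumps with config_filter.py -method sort and requires an empty diff. A sketch of that round-trip (temp-file names hypothetical; json_diff.sh actually plumbs the dumps through mktemp files and /dev/fd/62):

    $rpc save_config > /tmp/before.json
    # relaunch: spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/before.json
    $rpc save_config > /tmp/after.json
    config_filter.py -method sort < /tmp/before.json > /tmp/before.sorted
    config_filter.py -method sort < /tmp/after.json  > /tmp/after.sorted
    diff -u /tmp/before.sorted /tmp/after.sorted     # empty: config survived the reload
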
00:07:22.133 10:35:01 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:22.133 10:35:01 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:22.133 10:35:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:22.133 + '[' 2 -ne 2 ']' 00:07:22.133 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:22.133 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:22.133 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:22.133 +++ basename /dev/fd/62 00:07:22.133 ++ mktemp /tmp/62.XXX 00:07:22.133 + tmp_file_1=/tmp/62.sl7 00:07:22.133 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:22.133 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:22.133 + tmp_file_2=/tmp/spdk_tgt_config.json.mKB 00:07:22.133 + ret=0 00:07:22.133 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:22.394 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:22.655 + diff -u /tmp/62.sl7 /tmp/spdk_tgt_config.json.mKB 00:07:22.655 + echo 'INFO: JSON config files are the same' 00:07:22.655 INFO: JSON config files are the same 00:07:22.655 + rm /tmp/62.sl7 /tmp/spdk_tgt_config.json.mKB 00:07:22.655 + exit 0 00:07:22.655 10:35:01 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:22.655 10:35:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:22.655 INFO: changing configuration and checking if this can be detected... 00:07:22.655 10:35:01 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:22.655 10:35:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:22.655 10:35:01 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:22.655 10:35:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:22.655 10:35:01 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:22.655 + '[' 2 -ne 2 ']' 00:07:22.655 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:22.655 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:22.655 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:22.655 +++ basename /dev/fd/62 00:07:22.655 ++ mktemp /tmp/62.XXX 00:07:22.655 + tmp_file_1=/tmp/62.2fE 00:07:22.655 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:22.655 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:22.655 + tmp_file_2=/tmp/spdk_tgt_config.json.k9P 00:07:22.655 + ret=0 00:07:22.655 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:23.227 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:23.227 + diff -u /tmp/62.2fE /tmp/spdk_tgt_config.json.k9P 00:07:23.227 + ret=1 00:07:23.227 + echo '=== Start of file: /tmp/62.2fE ===' 00:07:23.227 + cat /tmp/62.2fE 00:07:23.227 + echo '=== End of file: /tmp/62.2fE ===' 00:07:23.227 + echo '' 00:07:23.227 + echo '=== Start of file: /tmp/spdk_tgt_config.json.k9P ===' 00:07:23.227 + cat /tmp/spdk_tgt_config.json.k9P 00:07:23.227 + echo '=== End of file: /tmp/spdk_tgt_config.json.k9P ===' 00:07:23.227 + echo '' 00:07:23.227 + rm /tmp/62.2fE /tmp/spdk_tgt_config.json.k9P 00:07:23.227 + exit 1 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:23.227 INFO: configuration change detected. 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@324 -- # [[ -n 785634 ]] 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.227 10:35:02 json_config -- json_config/json_config.sh@330 -- # killprocess 785634 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@954 -- # '[' -z 785634 ']' 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@958 -- # kill -0 785634 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@959 -- # uname 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.227 10:35:02 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785634 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785634' 00:07:23.227 killing process with pid 785634 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@973 -- # kill 785634 00:07:23.227 10:35:02 json_config -- common/autotest_common.sh@978 -- # wait 785634 00:07:23.489 10:35:02 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:23.489 10:35:02 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:23.489 10:35:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.489 10:35:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.489 10:35:02 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:23.489 10:35:02 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:23.489 INFO: Success 00:07:23.489 00:07:23.489 real 0m7.473s 00:07:23.489 user 0m8.995s 00:07:23.489 sys 0m2.040s 00:07:23.489 10:35:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.489 10:35:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.489 ************************************ 00:07:23.489 END TEST json_config 00:07:23.489 ************************************ 00:07:23.489 10:35:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:23.489 10:35:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.489 10:35:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.489 10:35:02 -- common/autotest_common.sh@10 -- # set +x 00:07:23.751 ************************************ 00:07:23.751 START TEST json_config_extra_key 00:07:23.751 ************************************ 00:07:23.751 10:35:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:23.751 10:35:02 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.751 10:35:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.751 10:35:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.751 10:35:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.751 10:35:02 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:23.751 10:35:02 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.751 10:35:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.751 --rc genhtml_branch_coverage=1 00:07:23.751 --rc genhtml_function_coverage=1 00:07:23.751 --rc genhtml_legend=1 00:07:23.751 --rc geninfo_all_blocks=1 00:07:23.751 --rc geninfo_unexecuted_blocks=1 00:07:23.751 00:07:23.751 ' 00:07:23.751 10:35:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.751 --rc genhtml_branch_coverage=1 00:07:23.751 --rc genhtml_function_coverage=1 00:07:23.751 --rc genhtml_legend=1 00:07:23.751 --rc geninfo_all_blocks=1 00:07:23.751 --rc geninfo_unexecuted_blocks=1 00:07:23.751 00:07:23.751 ' 00:07:23.751 10:35:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.751 --rc genhtml_branch_coverage=1 00:07:23.751 --rc genhtml_function_coverage=1 00:07:23.751 --rc genhtml_legend=1 00:07:23.751 --rc geninfo_all_blocks=1 00:07:23.751 --rc geninfo_unexecuted_blocks=1 00:07:23.751 00:07:23.751 ' 00:07:23.751 10:35:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.751 --rc genhtml_branch_coverage=1 00:07:23.751 --rc genhtml_function_coverage=1 00:07:23.751 --rc genhtml_legend=1 00:07:23.751 --rc geninfo_all_blocks=1 00:07:23.751 --rc geninfo_unexecuted_blocks=1 00:07:23.751 00:07:23.751 ' 00:07:23.751 10:35:02 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.751 10:35:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.751 10:35:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.751 10:35:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.751 10:35:02 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.751 10:35:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:23.751 10:35:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.751 10:35:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.751 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:23.751 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:23.751 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:23.751 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:23.751 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:23.751 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:23.751 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:23.751 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:23.752 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:23.752 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:23.752 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:23.752 INFO: launching applications... 
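
Note: the "[: : integer expression expected" complaint above is a genuine (if harmless here) bug in test/nvmf/common.sh: the traced test is '[' '' -eq 1 ']', i.e. -eq handed an empty expansion, which test(1) rejects because -eq needs two integers. The usual repair is to give the expansion a numeric default before comparing; a generic sketch (the real variable name at line 33 is not visible in this log):

    flag=""                          # unset/empty, as in the traced run
    if [ "${flag:-0}" -eq 1 ]; then  # ':-0' substitutes 0, keeping -eq happy
        echo "feature enabled"
    fi
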
00:07:23.752 10:35:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=786392 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:23.752 Waiting for target to run... 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 786392 /var/tmp/spdk_tgt.sock 00:07:23.752 10:35:02 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 786392 ']' 00:07:23.752 10:35:02 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:23.752 10:35:02 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.752 10:35:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:23.752 10:35:02 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:23.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:23.752 10:35:02 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.752 10:35:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:24.013 [2024-11-19 10:35:02.981079] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:07:24.013 [2024-11-19 10:35:02.981152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786392 ] 00:07:24.273 [2024-11-19 10:35:03.289931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.273 [2024-11-19 10:35:03.315949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.845 10:35:03 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.845 10:35:03 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:24.845 10:35:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:24.845 00:07:24.845 10:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:24.845 INFO: shutting down applications... 
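
Note: as in the earlier json_config run, waitforlisten (here for pid 786392) gates the test on the RPC UNIX socket actually accepting connections, not on the process merely existing. One way to express that wait as a self-contained sketch, assuming rpc.py with a short -t timeout as the probe (the real helper in autotest_common.sh also caps the number of retries):

    sock=/var/tmp/spdk_tgt.sock
    until scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                    # target not listening yet
    done
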
00:07:24.845 10:35:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:24.845 10:35:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:24.845 10:35:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:24.845 10:35:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 786392 ]] 00:07:24.845 10:35:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 786392 00:07:24.845 10:35:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:24.845 10:35:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:24.845 10:35:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 786392 00:07:24.845 10:35:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:25.106 10:35:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:25.106 10:35:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:25.106 10:35:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 786392 00:07:25.106 10:35:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:25.106 10:35:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:25.106 10:35:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:25.106 10:35:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:25.106 SPDK target shutdown done 00:07:25.106 10:35:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:25.106 Success 00:07:25.106 00:07:25.106 real 0m1.576s 00:07:25.106 user 0m1.176s 00:07:25.106 sys 0m0.438s 00:07:25.106 10:35:04 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.106 10:35:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:25.106 ************************************ 00:07:25.106 END TEST json_config_extra_key 00:07:25.106 ************************************ 00:07:25.368 10:35:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:25.368 10:35:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.368 10:35:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.368 10:35:04 -- common/autotest_common.sh@10 -- # set +x 00:07:25.368 ************************************ 00:07:25.368 START TEST alias_rpc 00:07:25.368 ************************************ 00:07:25.368 10:35:04 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:25.368 * Looking for test storage... 
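
Note: every TEST block that follows opens with the same coverage preamble, traced below: the last field of lcov --version is extracted with awk, then cmp_versions from scripts/common.sh splits both version strings on IFS=.-: and compares them component-wise to decide whether the pre-1.15 lcov flags are needed. A condensed sketch of that comparison, mirroring the "lt 1.15 2" call in the trace:

    ver=$(lcov --version | awk '{print $NF}')   # yields 1.15 in this run
    IFS=.-: read -ra ver1 <<< "$ver"
    IFS=.-: read -ra ver2 <<< "2"
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo "older"; break; }
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo "newer"; break; }
    done
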
00:07:25.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:25.368 10:35:04 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.368 10:35:04 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.368 10:35:04 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.368 10:35:04 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.368 10:35:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:25.630 10:35:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.630 10:35:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.630 10:35:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.630 10:35:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:25.630 10:35:04 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.630 10:35:04 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.630 --rc genhtml_branch_coverage=1 00:07:25.630 --rc genhtml_function_coverage=1 00:07:25.630 --rc genhtml_legend=1 00:07:25.630 --rc geninfo_all_blocks=1 00:07:25.630 --rc geninfo_unexecuted_blocks=1 00:07:25.630 00:07:25.630 ' 00:07:25.630 10:35:04 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.630 --rc genhtml_branch_coverage=1 00:07:25.630 --rc genhtml_function_coverage=1 00:07:25.630 --rc genhtml_legend=1 00:07:25.630 --rc geninfo_all_blocks=1 00:07:25.630 --rc geninfo_unexecuted_blocks=1 00:07:25.630 00:07:25.630 ' 00:07:25.630 10:35:04 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.630 --rc genhtml_branch_coverage=1 00:07:25.630 --rc genhtml_function_coverage=1 00:07:25.630 --rc genhtml_legend=1 00:07:25.630 --rc geninfo_all_blocks=1 00:07:25.630 --rc geninfo_unexecuted_blocks=1 00:07:25.630 00:07:25.630 ' 00:07:25.630 10:35:04 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.630 --rc genhtml_branch_coverage=1 00:07:25.630 --rc genhtml_function_coverage=1 00:07:25.630 --rc genhtml_legend=1 00:07:25.630 --rc geninfo_all_blocks=1 00:07:25.630 --rc geninfo_unexecuted_blocks=1 00:07:25.630 00:07:25.630 ' 00:07:25.630 10:35:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:25.630 10:35:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=786750 00:07:25.630 10:35:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 786750 00:07:25.630 10:35:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:25.630 10:35:04 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 786750 ']' 00:07:25.630 10:35:04 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.630 10:35:04 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.630 10:35:04 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.630 10:35:04 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.630 10:35:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.630 [2024-11-19 10:35:04.628676] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
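
Note: alias_rpc is the short test in this stretch: it boots a bare spdk_tgt (startup banner above, EAL parameters below), then, as traced below, drives scripts/rpc.py load_config -i against it and tears it down with killprocess. killprocess is deliberately paranoid: it confirms the pid is alive with kill -0, reads the command name back with ps, and refuses to kill anything named sudo before signalling and reaping. A condensed sketch of that helper:

    kill -0 "$pid"                                   # fails fast if already gone
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
    if [ "$process_name" != sudo ]; then
        kill "$pid" && wait "$pid"
    fi
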
00:07:25.630 [2024-11-19 10:35:04.628750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786750 ] 00:07:25.630 [2024-11-19 10:35:04.715878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.630 [2024-11-19 10:35:04.756449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:26.570 10:35:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:26.570 10:35:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 786750 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 786750 ']' 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 786750 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786750 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786750' 00:07:26.570 killing process with pid 786750 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@973 -- # kill 786750 00:07:26.570 10:35:05 alias_rpc -- common/autotest_common.sh@978 -- # wait 786750 00:07:26.830 00:07:26.830 real 0m1.523s 00:07:26.830 user 0m1.662s 00:07:26.830 sys 0m0.455s 00:07:26.830 10:35:05 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.830 10:35:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.830 ************************************ 00:07:26.830 END TEST alias_rpc 00:07:26.830 ************************************ 00:07:26.830 10:35:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:26.830 10:35:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:26.830 10:35:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.830 10:35:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.830 10:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:26.830 ************************************ 00:07:26.830 START TEST spdkcli_tcp 00:07:26.830 ************************************ 00:07:26.830 10:35:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:27.091 * Looking for test storage... 
00:07:27.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.091 10:35:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.091 --rc genhtml_branch_coverage=1 00:07:27.091 --rc genhtml_function_coverage=1 00:07:27.091 --rc genhtml_legend=1 00:07:27.091 --rc geninfo_all_blocks=1 00:07:27.091 --rc geninfo_unexecuted_blocks=1 00:07:27.091 00:07:27.091 ' 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.091 --rc genhtml_branch_coverage=1 00:07:27.091 --rc genhtml_function_coverage=1 00:07:27.091 --rc genhtml_legend=1 00:07:27.091 --rc geninfo_all_blocks=1 00:07:27.091 --rc 
geninfo_unexecuted_blocks=1 00:07:27.091 00:07:27.091 ' 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.091 --rc genhtml_branch_coverage=1 00:07:27.091 --rc genhtml_function_coverage=1 00:07:27.091 --rc genhtml_legend=1 00:07:27.091 --rc geninfo_all_blocks=1 00:07:27.091 --rc geninfo_unexecuted_blocks=1 00:07:27.091 00:07:27.091 ' 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.091 --rc genhtml_branch_coverage=1 00:07:27.091 --rc genhtml_function_coverage=1 00:07:27.091 --rc genhtml_legend=1 00:07:27.091 --rc geninfo_all_blocks=1 00:07:27.091 --rc geninfo_unexecuted_blocks=1 00:07:27.091 00:07:27.091 ' 00:07:27.091 10:35:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:27.091 10:35:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:27.091 10:35:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:27.091 10:35:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:27.091 10:35:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:27.091 10:35:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:27.091 10:35:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.091 10:35:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=787098 00:07:27.091 10:35:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 787098 00:07:27.091 10:35:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 787098 ']' 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.091 10:35:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.091 [2024-11-19 10:35:06.231896] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
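
Note: spdkcli_tcp is the one test here that exercises the RPC server over TCP instead of the UNIX socket: the target comes up with -m 0x3 (two reactors, as the two "Reactor started" notices below confirm), and a socat process bridges TCP port 9998 to /var/tmp/spdk.sock so rpc.py can connect with -s 127.0.0.1 -p 9998. The bridge and the probe, exactly as traced below (-r 100 connection retries, -t 2 second timeout):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The long method list that follows is the raw JSON answer to rpc_get_methods; the test only cares that the call succeeds over the TCP bridge.
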
00:07:27.091 [2024-11-19 10:35:06.231961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787098 ] 00:07:27.352 [2024-11-19 10:35:06.321758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.352 [2024-11-19 10:35:06.364816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.352 [2024-11-19 10:35:06.364817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.923 10:35:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.923 10:35:07 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:27.923 10:35:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:27.923 10:35:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=787226 00:07:27.923 10:35:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:28.183 [ 00:07:28.183 "bdev_malloc_delete", 00:07:28.183 "bdev_malloc_create", 00:07:28.183 "bdev_null_resize", 00:07:28.183 "bdev_null_delete", 00:07:28.183 "bdev_null_create", 00:07:28.183 "bdev_nvme_cuse_unregister", 00:07:28.183 "bdev_nvme_cuse_register", 00:07:28.183 "bdev_opal_new_user", 00:07:28.183 "bdev_opal_set_lock_state", 00:07:28.183 "bdev_opal_delete", 00:07:28.183 "bdev_opal_get_info", 00:07:28.183 "bdev_opal_create", 00:07:28.184 "bdev_nvme_opal_revert", 00:07:28.184 "bdev_nvme_opal_init", 00:07:28.184 "bdev_nvme_send_cmd", 00:07:28.184 "bdev_nvme_set_keys", 00:07:28.184 "bdev_nvme_get_path_iostat", 00:07:28.184 "bdev_nvme_get_mdns_discovery_info", 00:07:28.184 "bdev_nvme_stop_mdns_discovery", 00:07:28.184 "bdev_nvme_start_mdns_discovery", 00:07:28.184 "bdev_nvme_set_multipath_policy", 00:07:28.184 "bdev_nvme_set_preferred_path", 00:07:28.184 "bdev_nvme_get_io_paths", 00:07:28.184 "bdev_nvme_remove_error_injection", 00:07:28.184 "bdev_nvme_add_error_injection", 00:07:28.184 "bdev_nvme_get_discovery_info", 00:07:28.184 "bdev_nvme_stop_discovery", 00:07:28.184 "bdev_nvme_start_discovery", 00:07:28.184 "bdev_nvme_get_controller_health_info", 00:07:28.184 "bdev_nvme_disable_controller", 00:07:28.184 "bdev_nvme_enable_controller", 00:07:28.184 "bdev_nvme_reset_controller", 00:07:28.184 "bdev_nvme_get_transport_statistics", 00:07:28.184 "bdev_nvme_apply_firmware", 00:07:28.184 "bdev_nvme_detach_controller", 00:07:28.184 "bdev_nvme_get_controllers", 00:07:28.184 "bdev_nvme_attach_controller", 00:07:28.184 "bdev_nvme_set_hotplug", 00:07:28.184 "bdev_nvme_set_options", 00:07:28.184 "bdev_passthru_delete", 00:07:28.184 "bdev_passthru_create", 00:07:28.184 "bdev_lvol_set_parent_bdev", 00:07:28.184 "bdev_lvol_set_parent", 00:07:28.184 "bdev_lvol_check_shallow_copy", 00:07:28.184 "bdev_lvol_start_shallow_copy", 00:07:28.184 "bdev_lvol_grow_lvstore", 00:07:28.184 "bdev_lvol_get_lvols", 00:07:28.184 "bdev_lvol_get_lvstores", 00:07:28.184 "bdev_lvol_delete", 00:07:28.184 "bdev_lvol_set_read_only", 00:07:28.184 "bdev_lvol_resize", 00:07:28.184 "bdev_lvol_decouple_parent", 00:07:28.184 "bdev_lvol_inflate", 00:07:28.184 "bdev_lvol_rename", 00:07:28.184 "bdev_lvol_clone_bdev", 00:07:28.184 "bdev_lvol_clone", 00:07:28.184 "bdev_lvol_snapshot", 00:07:28.184 "bdev_lvol_create", 00:07:28.184 "bdev_lvol_delete_lvstore", 00:07:28.184 "bdev_lvol_rename_lvstore", 
00:07:28.184 "bdev_lvol_create_lvstore", 00:07:28.184 "bdev_raid_set_options", 00:07:28.184 "bdev_raid_remove_base_bdev", 00:07:28.184 "bdev_raid_add_base_bdev", 00:07:28.184 "bdev_raid_delete", 00:07:28.184 "bdev_raid_create", 00:07:28.184 "bdev_raid_get_bdevs", 00:07:28.184 "bdev_error_inject_error", 00:07:28.184 "bdev_error_delete", 00:07:28.184 "bdev_error_create", 00:07:28.184 "bdev_split_delete", 00:07:28.184 "bdev_split_create", 00:07:28.184 "bdev_delay_delete", 00:07:28.184 "bdev_delay_create", 00:07:28.184 "bdev_delay_update_latency", 00:07:28.184 "bdev_zone_block_delete", 00:07:28.184 "bdev_zone_block_create", 00:07:28.184 "blobfs_create", 00:07:28.184 "blobfs_detect", 00:07:28.184 "blobfs_set_cache_size", 00:07:28.184 "bdev_aio_delete", 00:07:28.184 "bdev_aio_rescan", 00:07:28.184 "bdev_aio_create", 00:07:28.184 "bdev_ftl_set_property", 00:07:28.184 "bdev_ftl_get_properties", 00:07:28.184 "bdev_ftl_get_stats", 00:07:28.184 "bdev_ftl_unmap", 00:07:28.184 "bdev_ftl_unload", 00:07:28.184 "bdev_ftl_delete", 00:07:28.184 "bdev_ftl_load", 00:07:28.184 "bdev_ftl_create", 00:07:28.184 "bdev_virtio_attach_controller", 00:07:28.184 "bdev_virtio_scsi_get_devices", 00:07:28.184 "bdev_virtio_detach_controller", 00:07:28.184 "bdev_virtio_blk_set_hotplug", 00:07:28.184 "bdev_iscsi_delete", 00:07:28.184 "bdev_iscsi_create", 00:07:28.184 "bdev_iscsi_set_options", 00:07:28.184 "accel_error_inject_error", 00:07:28.184 "ioat_scan_accel_module", 00:07:28.184 "dsa_scan_accel_module", 00:07:28.184 "iaa_scan_accel_module", 00:07:28.184 "vfu_virtio_create_fs_endpoint", 00:07:28.184 "vfu_virtio_create_scsi_endpoint", 00:07:28.184 "vfu_virtio_scsi_remove_target", 00:07:28.184 "vfu_virtio_scsi_add_target", 00:07:28.184 "vfu_virtio_create_blk_endpoint", 00:07:28.184 "vfu_virtio_delete_endpoint", 00:07:28.184 "keyring_file_remove_key", 00:07:28.184 "keyring_file_add_key", 00:07:28.184 "keyring_linux_set_options", 00:07:28.184 "fsdev_aio_delete", 00:07:28.184 "fsdev_aio_create", 00:07:28.184 "iscsi_get_histogram", 00:07:28.184 "iscsi_enable_histogram", 00:07:28.184 "iscsi_set_options", 00:07:28.184 "iscsi_get_auth_groups", 00:07:28.184 "iscsi_auth_group_remove_secret", 00:07:28.184 "iscsi_auth_group_add_secret", 00:07:28.184 "iscsi_delete_auth_group", 00:07:28.184 "iscsi_create_auth_group", 00:07:28.184 "iscsi_set_discovery_auth", 00:07:28.184 "iscsi_get_options", 00:07:28.184 "iscsi_target_node_request_logout", 00:07:28.184 "iscsi_target_node_set_redirect", 00:07:28.184 "iscsi_target_node_set_auth", 00:07:28.184 "iscsi_target_node_add_lun", 00:07:28.184 "iscsi_get_stats", 00:07:28.184 "iscsi_get_connections", 00:07:28.184 "iscsi_portal_group_set_auth", 00:07:28.184 "iscsi_start_portal_group", 00:07:28.184 "iscsi_delete_portal_group", 00:07:28.184 "iscsi_create_portal_group", 00:07:28.184 "iscsi_get_portal_groups", 00:07:28.184 "iscsi_delete_target_node", 00:07:28.184 "iscsi_target_node_remove_pg_ig_maps", 00:07:28.184 "iscsi_target_node_add_pg_ig_maps", 00:07:28.184 "iscsi_create_target_node", 00:07:28.184 "iscsi_get_target_nodes", 00:07:28.184 "iscsi_delete_initiator_group", 00:07:28.184 "iscsi_initiator_group_remove_initiators", 00:07:28.184 "iscsi_initiator_group_add_initiators", 00:07:28.184 "iscsi_create_initiator_group", 00:07:28.184 "iscsi_get_initiator_groups", 00:07:28.184 "nvmf_set_crdt", 00:07:28.184 "nvmf_set_config", 00:07:28.184 "nvmf_set_max_subsystems", 00:07:28.184 "nvmf_stop_mdns_prr", 00:07:28.184 "nvmf_publish_mdns_prr", 00:07:28.184 "nvmf_subsystem_get_listeners", 00:07:28.184 
"nvmf_subsystem_get_qpairs", 00:07:28.184 "nvmf_subsystem_get_controllers", 00:07:28.184 "nvmf_get_stats", 00:07:28.184 "nvmf_get_transports", 00:07:28.184 "nvmf_create_transport", 00:07:28.184 "nvmf_get_targets", 00:07:28.184 "nvmf_delete_target", 00:07:28.184 "nvmf_create_target", 00:07:28.184 "nvmf_subsystem_allow_any_host", 00:07:28.184 "nvmf_subsystem_set_keys", 00:07:28.184 "nvmf_subsystem_remove_host", 00:07:28.184 "nvmf_subsystem_add_host", 00:07:28.184 "nvmf_ns_remove_host", 00:07:28.184 "nvmf_ns_add_host", 00:07:28.184 "nvmf_subsystem_remove_ns", 00:07:28.184 "nvmf_subsystem_set_ns_ana_group", 00:07:28.184 "nvmf_subsystem_add_ns", 00:07:28.184 "nvmf_subsystem_listener_set_ana_state", 00:07:28.184 "nvmf_discovery_get_referrals", 00:07:28.184 "nvmf_discovery_remove_referral", 00:07:28.184 "nvmf_discovery_add_referral", 00:07:28.184 "nvmf_subsystem_remove_listener", 00:07:28.184 "nvmf_subsystem_add_listener", 00:07:28.184 "nvmf_delete_subsystem", 00:07:28.184 "nvmf_create_subsystem", 00:07:28.184 "nvmf_get_subsystems", 00:07:28.184 "env_dpdk_get_mem_stats", 00:07:28.184 "nbd_get_disks", 00:07:28.184 "nbd_stop_disk", 00:07:28.184 "nbd_start_disk", 00:07:28.185 "ublk_recover_disk", 00:07:28.185 "ublk_get_disks", 00:07:28.185 "ublk_stop_disk", 00:07:28.185 "ublk_start_disk", 00:07:28.185 "ublk_destroy_target", 00:07:28.185 "ublk_create_target", 00:07:28.185 "virtio_blk_create_transport", 00:07:28.185 "virtio_blk_get_transports", 00:07:28.185 "vhost_controller_set_coalescing", 00:07:28.185 "vhost_get_controllers", 00:07:28.185 "vhost_delete_controller", 00:07:28.185 "vhost_create_blk_controller", 00:07:28.185 "vhost_scsi_controller_remove_target", 00:07:28.185 "vhost_scsi_controller_add_target", 00:07:28.185 "vhost_start_scsi_controller", 00:07:28.185 "vhost_create_scsi_controller", 00:07:28.185 "thread_set_cpumask", 00:07:28.185 "scheduler_set_options", 00:07:28.185 "framework_get_governor", 00:07:28.185 "framework_get_scheduler", 00:07:28.185 "framework_set_scheduler", 00:07:28.185 "framework_get_reactors", 00:07:28.185 "thread_get_io_channels", 00:07:28.185 "thread_get_pollers", 00:07:28.185 "thread_get_stats", 00:07:28.185 "framework_monitor_context_switch", 00:07:28.185 "spdk_kill_instance", 00:07:28.185 "log_enable_timestamps", 00:07:28.185 "log_get_flags", 00:07:28.185 "log_clear_flag", 00:07:28.185 "log_set_flag", 00:07:28.185 "log_get_level", 00:07:28.185 "log_set_level", 00:07:28.185 "log_get_print_level", 00:07:28.185 "log_set_print_level", 00:07:28.185 "framework_enable_cpumask_locks", 00:07:28.185 "framework_disable_cpumask_locks", 00:07:28.185 "framework_wait_init", 00:07:28.185 "framework_start_init", 00:07:28.185 "scsi_get_devices", 00:07:28.185 "bdev_get_histogram", 00:07:28.185 "bdev_enable_histogram", 00:07:28.185 "bdev_set_qos_limit", 00:07:28.185 "bdev_set_qd_sampling_period", 00:07:28.185 "bdev_get_bdevs", 00:07:28.185 "bdev_reset_iostat", 00:07:28.185 "bdev_get_iostat", 00:07:28.185 "bdev_examine", 00:07:28.185 "bdev_wait_for_examine", 00:07:28.185 "bdev_set_options", 00:07:28.185 "accel_get_stats", 00:07:28.185 "accel_set_options", 00:07:28.185 "accel_set_driver", 00:07:28.185 "accel_crypto_key_destroy", 00:07:28.185 "accel_crypto_keys_get", 00:07:28.185 "accel_crypto_key_create", 00:07:28.185 "accel_assign_opc", 00:07:28.185 "accel_get_module_info", 00:07:28.185 "accel_get_opc_assignments", 00:07:28.185 "vmd_rescan", 00:07:28.185 "vmd_remove_device", 00:07:28.185 "vmd_enable", 00:07:28.185 "sock_get_default_impl", 00:07:28.185 "sock_set_default_impl", 
00:07:28.185 "sock_impl_set_options", 00:07:28.185 "sock_impl_get_options", 00:07:28.185 "iobuf_get_stats", 00:07:28.185 "iobuf_set_options", 00:07:28.185 "keyring_get_keys", 00:07:28.185 "vfu_tgt_set_base_path", 00:07:28.185 "framework_get_pci_devices", 00:07:28.185 "framework_get_config", 00:07:28.185 "framework_get_subsystems", 00:07:28.185 "fsdev_set_opts", 00:07:28.185 "fsdev_get_opts", 00:07:28.185 "trace_get_info", 00:07:28.185 "trace_get_tpoint_group_mask", 00:07:28.185 "trace_disable_tpoint_group", 00:07:28.185 "trace_enable_tpoint_group", 00:07:28.185 "trace_clear_tpoint_mask", 00:07:28.185 "trace_set_tpoint_mask", 00:07:28.185 "notify_get_notifications", 00:07:28.185 "notify_get_types", 00:07:28.185 "spdk_get_version", 00:07:28.185 "rpc_get_methods" 00:07:28.185 ] 00:07:28.185 10:35:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.185 10:35:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:28.185 10:35:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 787098 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 787098 ']' 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 787098 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787098 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787098' 00:07:28.185 killing process with pid 787098 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 787098 00:07:28.185 10:35:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 787098 00:07:28.446 00:07:28.446 real 0m1.553s 00:07:28.446 user 0m2.823s 00:07:28.446 sys 0m0.487s 00:07:28.446 10:35:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.446 10:35:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.446 ************************************ 00:07:28.446 END TEST spdkcli_tcp 00:07:28.446 ************************************ 00:07:28.446 10:35:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:28.446 10:35:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.446 10:35:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.446 10:35:07 -- common/autotest_common.sh@10 -- # set +x 00:07:28.446 ************************************ 00:07:28.446 START TEST dpdk_mem_utility 00:07:28.446 ************************************ 00:07:28.446 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:28.708 * Looking for test storage... 
00:07:28.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.708 10:35:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.708 --rc genhtml_branch_coverage=1 00:07:28.708 --rc genhtml_function_coverage=1 00:07:28.708 --rc genhtml_legend=1 00:07:28.708 --rc geninfo_all_blocks=1 00:07:28.708 --rc geninfo_unexecuted_blocks=1 00:07:28.708 00:07:28.708 ' 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.708 --rc 
genhtml_branch_coverage=1 00:07:28.708 --rc genhtml_function_coverage=1 00:07:28.708 --rc genhtml_legend=1 00:07:28.708 --rc geninfo_all_blocks=1 00:07:28.708 --rc geninfo_unexecuted_blocks=1 00:07:28.708 00:07:28.708 ' 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.708 --rc genhtml_branch_coverage=1 00:07:28.708 --rc genhtml_function_coverage=1 00:07:28.708 --rc genhtml_legend=1 00:07:28.708 --rc geninfo_all_blocks=1 00:07:28.708 --rc geninfo_unexecuted_blocks=1 00:07:28.708 00:07:28.708 ' 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.708 --rc genhtml_branch_coverage=1 00:07:28.708 --rc genhtml_function_coverage=1 00:07:28.708 --rc genhtml_legend=1 00:07:28.708 --rc geninfo_all_blocks=1 00:07:28.708 --rc geninfo_unexecuted_blocks=1 00:07:28.708 00:07:28.708 ' 00:07:28.708 10:35:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:28.708 10:35:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=787473 00:07:28.708 10:35:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 787473 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 787473 ']' 00:07:28.708 10:35:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.708 10:35:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:28.708 [2024-11-19 10:35:07.852749] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:07:28.708 [2024-11-19 10:35:07.852820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787473 ] 00:07:28.969 [2024-11-19 10:35:07.939785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.969 [2024-11-19 10:35:07.974679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.541 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.541 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:29.541 10:35:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:29.541 10:35:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:29.541 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.541 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:29.541 { 00:07:29.541 "filename": "/tmp/spdk_mem_dump.txt" 00:07:29.541 } 00:07:29.541 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.541 10:35:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:29.541 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:29.541 1 heaps totaling size 810.000000 MiB 00:07:29.541 size: 810.000000 MiB heap id: 0 00:07:29.541 end heaps---------- 00:07:29.541 9 mempools totaling size 595.772034 MiB 00:07:29.541 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:29.541 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:29.541 size: 92.545471 MiB name: bdev_io_787473 00:07:29.541 size: 50.003479 MiB name: msgpool_787473 00:07:29.541 size: 36.509338 MiB name: fsdev_io_787473 00:07:29.541 size: 21.763794 MiB name: PDU_Pool 00:07:29.541 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:29.541 size: 4.133484 MiB name: evtpool_787473 00:07:29.541 size: 0.026123 MiB name: Session_Pool 00:07:29.541 end mempools------- 00:07:29.541 6 memzones totaling size 4.142822 MiB 00:07:29.541 size: 1.000366 MiB name: RG_ring_0_787473 00:07:29.541 size: 1.000366 MiB name: RG_ring_1_787473 00:07:29.541 size: 1.000366 MiB name: RG_ring_4_787473 00:07:29.541 size: 1.000366 MiB name: RG_ring_5_787473 00:07:29.541 size: 0.125366 MiB name: RG_ring_2_787473 00:07:29.541 size: 0.015991 MiB name: RG_ring_3_787473 00:07:29.541 end memzones------- 00:07:29.541 10:35:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:29.803 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:29.803 list of free elements. 
size: 10.862488 MiB 00:07:29.803 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:29.803 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:29.803 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:29.803 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:29.803 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:29.803 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:29.803 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:29.803 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:29.803 element at address: 0x20001a600000 with size: 0.582886 MiB 00:07:29.803 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:29.803 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:29.803 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:29.803 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:29.803 element at address: 0x200027a00000 with size: 0.410034 MiB 00:07:29.803 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:29.803 list of standard malloc elements. size: 199.218628 MiB 00:07:29.803 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:29.803 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:29.803 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:29.803 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:29.803 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:29.803 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:29.803 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:29.803 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:29.803 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:29.803 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:29.803 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:29.803 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:29.803 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:29.803 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:29.803 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:29.803 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:29.803 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:29.803 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:29.803 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:29.803 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:29.803 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:29.803 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:29.803 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:29.803 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:29.803 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:29.803 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:29.803 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:07:29.803 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:29.803 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:29.803 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:29.803 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200027a69040 with size: 0.000183 MiB 00:07:29.803 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:07:29.804 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:29.804 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:29.804 list of memzone associated elements. size: 599.918884 MiB 00:07:29.804 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:29.804 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:29.804 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:29.804 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:29.804 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:29.804 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_787473_0 00:07:29.804 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:29.804 associated memzone info: size: 48.002930 MiB name: MP_msgpool_787473_0 00:07:29.804 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:29.804 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_787473_0 00:07:29.804 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:29.804 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:29.804 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:29.804 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:29.804 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:29.804 associated memzone info: size: 3.000122 MiB name: MP_evtpool_787473_0 00:07:29.804 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:29.804 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_787473 00:07:29.804 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:29.804 associated memzone info: size: 1.007996 MiB name: MP_evtpool_787473 00:07:29.804 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:29.804 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:29.804 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:29.804 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:29.804 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:29.804 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:29.804 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:29.804 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:29.804 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:29.804 associated memzone info: size: 1.000366 MiB name: RG_ring_0_787473 00:07:29.804 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:29.804 associated memzone info: size: 1.000366 MiB name: RG_ring_1_787473 00:07:29.804 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:29.804 associated memzone info: size: 1.000366 MiB name: RG_ring_4_787473 00:07:29.804 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:07:29.804 associated memzone info: size: 1.000366 MiB name: RG_ring_5_787473 00:07:29.804 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:29.804 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_787473 00:07:29.804 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:29.804 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_787473 00:07:29.804 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:29.804 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:29.804 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:29.804 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:29.804 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:29.804 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:29.804 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:29.804 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_787473 00:07:29.804 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:29.804 associated memzone info: size: 0.125366 MiB name: RG_ring_2_787473 00:07:29.804 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:29.804 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:29.804 element at address: 0x200027a69100 with size: 0.023743 MiB 00:07:29.804 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:29.804 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:29.804 associated memzone info: size: 0.015991 MiB name: RG_ring_3_787473 00:07:29.804 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:07:29.804 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:29.804 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:29.804 associated memzone info: size: 0.000183 MiB name: MP_msgpool_787473 00:07:29.804 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:29.804 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_787473 00:07:29.804 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:29.804 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_787473 00:07:29.804 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:07:29.804 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:29.804 10:35:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:29.804 10:35:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 787473 00:07:29.804 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 787473 ']' 00:07:29.804 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 787473 00:07:29.804 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:29.804 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.804 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787473 00:07:29.804 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.804 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.804 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787473' 00:07:29.804 killing process with pid 787473 00:07:29.804 10:35:08 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 787473 00:07:29.804 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 787473 00:07:30.065 00:07:30.065 real 0m1.408s 00:07:30.065 user 0m1.477s 00:07:30.065 sys 0m0.428s 00:07:30.065 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.065 10:35:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:30.065 ************************************ 00:07:30.065 END TEST dpdk_mem_utility 00:07:30.065 ************************************ 00:07:30.065 10:35:09 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:30.065 10:35:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.065 10:35:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.065 10:35:09 -- common/autotest_common.sh@10 -- # set +x 00:07:30.065 ************************************ 00:07:30.065 START TEST event 00:07:30.065 ************************************ 00:07:30.065 10:35:09 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:30.065 * Looking for test storage... 00:07:30.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:30.065 10:35:09 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.065 10:35:09 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.065 10:35:09 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.065 10:35:09 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.065 10:35:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.065 10:35:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.065 10:35:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.065 10:35:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.065 10:35:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.065 10:35:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.065 10:35:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.065 10:35:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.065 10:35:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.065 10:35:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.065 10:35:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.065 10:35:09 event -- scripts/common.sh@344 -- # case "$op" in 00:07:30.065 10:35:09 event -- scripts/common.sh@345 -- # : 1 00:07:30.065 10:35:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.065 10:35:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.325 10:35:09 event -- scripts/common.sh@365 -- # decimal 1 00:07:30.326 10:35:09 event -- scripts/common.sh@353 -- # local d=1 00:07:30.326 10:35:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.326 10:35:09 event -- scripts/common.sh@355 -- # echo 1 00:07:30.326 10:35:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.326 10:35:09 event -- scripts/common.sh@366 -- # decimal 2 00:07:30.326 10:35:09 event -- scripts/common.sh@353 -- # local d=2 00:07:30.326 10:35:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.326 10:35:09 event -- scripts/common.sh@355 -- # echo 2 00:07:30.326 10:35:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.326 10:35:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.326 10:35:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.326 10:35:09 event -- scripts/common.sh@368 -- # return 0 00:07:30.326 10:35:09 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.326 10:35:09 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.326 --rc genhtml_branch_coverage=1 00:07:30.326 --rc genhtml_function_coverage=1 00:07:30.326 --rc genhtml_legend=1 00:07:30.326 --rc geninfo_all_blocks=1 00:07:30.326 --rc geninfo_unexecuted_blocks=1 00:07:30.326 00:07:30.326 ' 00:07:30.326 10:35:09 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.326 --rc genhtml_branch_coverage=1 00:07:30.326 --rc genhtml_function_coverage=1 00:07:30.326 --rc genhtml_legend=1 00:07:30.326 --rc geninfo_all_blocks=1 00:07:30.326 --rc geninfo_unexecuted_blocks=1 00:07:30.326 00:07:30.326 ' 00:07:30.326 10:35:09 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.326 --rc genhtml_branch_coverage=1 00:07:30.326 --rc genhtml_function_coverage=1 00:07:30.326 --rc genhtml_legend=1 00:07:30.326 --rc geninfo_all_blocks=1 00:07:30.326 --rc geninfo_unexecuted_blocks=1 00:07:30.326 00:07:30.326 ' 00:07:30.326 10:35:09 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.326 --rc genhtml_branch_coverage=1 00:07:30.326 --rc genhtml_function_coverage=1 00:07:30.326 --rc genhtml_legend=1 00:07:30.326 --rc geninfo_all_blocks=1 00:07:30.326 --rc geninfo_unexecuted_blocks=1 00:07:30.326 00:07:30.326 ' 00:07:30.326 10:35:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:30.326 10:35:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:30.326 10:35:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:30.326 10:35:09 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:30.326 10:35:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.326 10:35:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.326 ************************************ 00:07:30.326 START TEST event_perf 00:07:30.326 ************************************ 00:07:30.326 10:35:09 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:07:30.326 Running I/O for 1 seconds...[2024-11-19 10:35:09.339756] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:07:30.326 [2024-11-19 10:35:09.339853] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787776 ] 00:07:30.326 [2024-11-19 10:35:09.431897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.326 [2024-11-19 10:35:09.475573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.326 [2024-11-19 10:35:09.475727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.326 [2024-11-19 10:35:09.475881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.326 Running I/O for 1 seconds...[2024-11-19 10:35:09.475883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.707 00:07:31.707 lcore 0: 176558 00:07:31.707 lcore 1: 176561 00:07:31.707 lcore 2: 176561 00:07:31.707 lcore 3: 176563 00:07:31.707 done. 00:07:31.707 00:07:31.707 real 0m1.186s 00:07:31.707 user 0m4.092s 00:07:31.707 sys 0m0.090s 00:07:31.707 10:35:10 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.707 10:35:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:31.707 ************************************ 00:07:31.707 END TEST event_perf 00:07:31.707 ************************************ 00:07:31.707 10:35:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:31.707 10:35:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:31.707 10:35:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.707 10:35:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:31.707 ************************************ 00:07:31.707 START TEST event_reactor 00:07:31.707 ************************************ 00:07:31.707 10:35:10 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:31.707 [2024-11-19 10:35:10.601686] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:07:31.707 [2024-11-19 10:35:10.601791] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788068 ] 00:07:31.707 [2024-11-19 10:35:10.687938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.707 [2024-11-19 10:35:10.727314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.647 test_start 00:07:32.647 oneshot 00:07:32.647 tick 100 00:07:32.647 tick 100 00:07:32.647 tick 250 00:07:32.647 tick 100 00:07:32.647 tick 100 00:07:32.647 tick 250 00:07:32.647 tick 100 00:07:32.647 tick 500 00:07:32.647 tick 100 00:07:32.647 tick 100 00:07:32.647 tick 250 00:07:32.647 tick 100 00:07:32.648 tick 100 00:07:32.648 test_end 00:07:32.648 00:07:32.648 real 0m1.173s 00:07:32.648 user 0m1.089s 00:07:32.648 sys 0m0.079s 00:07:32.648 10:35:11 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.648 10:35:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 ************************************ 00:07:32.648 END TEST event_reactor 00:07:32.648 ************************************ 00:07:32.648 10:35:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:32.648 10:35:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:32.648 10:35:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.648 10:35:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 ************************************ 00:07:32.648 START TEST event_reactor_perf 00:07:32.648 ************************************ 00:07:32.648 10:35:11 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:32.907 [2024-11-19 10:35:11.854256] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:07:32.907 [2024-11-19 10:35:11.854351] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788419 ] 00:07:32.907 [2024-11-19 10:35:11.942393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.907 [2024-11-19 10:35:11.973418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.848 test_start 00:07:33.848 test_end 00:07:33.848 Performance: 541638 events per second 00:07:33.849 00:07:33.849 real 0m1.167s 00:07:33.849 user 0m1.089s 00:07:33.849 sys 0m0.075s 00:07:33.849 10:35:12 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.849 10:35:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.849 ************************************ 00:07:33.849 END TEST event_reactor_perf 00:07:33.849 ************************************ 00:07:33.849 10:35:13 event -- event/event.sh@49 -- # uname -s 00:07:34.110 10:35:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:34.110 10:35:13 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:34.110 10:35:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.110 10:35:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.110 10:35:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.110 ************************************ 00:07:34.110 START TEST event_scheduler 00:07:34.110 ************************************ 00:07:34.110 10:35:13 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:34.110 * Looking for test storage... 
00:07:34.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.111 10:35:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.111 --rc genhtml_branch_coverage=1 00:07:34.111 --rc genhtml_function_coverage=1 00:07:34.111 --rc genhtml_legend=1 00:07:34.111 --rc geninfo_all_blocks=1 00:07:34.111 --rc geninfo_unexecuted_blocks=1 00:07:34.111 00:07:34.111 ' 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.111 --rc genhtml_branch_coverage=1 00:07:34.111 --rc genhtml_function_coverage=1 00:07:34.111 --rc genhtml_legend=1 00:07:34.111 --rc geninfo_all_blocks=1 00:07:34.111 --rc geninfo_unexecuted_blocks=1 00:07:34.111 00:07:34.111 ' 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.111 --rc genhtml_branch_coverage=1 00:07:34.111 --rc genhtml_function_coverage=1 00:07:34.111 --rc genhtml_legend=1 00:07:34.111 --rc geninfo_all_blocks=1 00:07:34.111 --rc geninfo_unexecuted_blocks=1 00:07:34.111 00:07:34.111 ' 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.111 --rc genhtml_branch_coverage=1 00:07:34.111 --rc genhtml_function_coverage=1 00:07:34.111 --rc genhtml_legend=1 00:07:34.111 --rc geninfo_all_blocks=1 00:07:34.111 --rc geninfo_unexecuted_blocks=1 00:07:34.111 00:07:34.111 ' 00:07:34.111 10:35:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:34.111 10:35:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=788807 00:07:34.111 10:35:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:34.111 10:35:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:34.111 10:35:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 788807 
00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 788807 ']' 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.111 10:35:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:34.372 [2024-11-19 10:35:13.336801] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:07:34.372 [2024-11-19 10:35:13.336853] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788807 ] 00:07:34.372 [2024-11-19 10:35:13.425337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.372 [2024-11-19 10:35:13.464186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.372 [2024-11-19 10:35:13.464302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.372 [2024-11-19 10:35:13.464422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.372 [2024-11-19 10:35:13.464423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.317 10:35:14 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.317 10:35:14 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:35.317 10:35:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:35.317 10:35:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.317 10:35:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:35.317 [2024-11-19 10:35:14.142727] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:35.318 [2024-11-19 10:35:14.142746] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:35.318 [2024-11-19 10:35:14.142756] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:35.318 [2024-11-19 10:35:14.142762] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:35.318 [2024-11-19 10:35:14.142767] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:35.318 10:35:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.318 10:35:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:35.318 10:35:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.318 10:35:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:35.318 [2024-11-19 10:35:14.205504] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
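The trace above shows the event-framework handshake the scheduler test performs over RPC: the app is launched with --wait-for-rpc, the dynamic scheduler is selected, and only then is framework_start_init issued. A minimal sketch of replaying that handshake by hand, assuming an SPDK app paused at the RPC phase on the default /var/tmp/spdk.sock, and using only RPC names that appear in the rpc_get_methods listing earlier in this log:

    # Launch any SPDK app paused at the RPC configuration phase, e.g.:
    #   build/bin/spdk_tgt -m 0xF --wait-for-rpc &
    ./scripts/rpc.py framework_set_scheduler dynamic   # issued before init, as in the test above
    ./scripts/rpc.py framework_start_init              # complete subsystem initialization
    ./scripts/rpc.py framework_get_scheduler           # confirm the active scheduler
    ./scripts/rpc.py framework_get_reactors            # per-core reactor and lightweight-thread state

The relative paths assume the SPDK source tree layout used throughout this log; exact notice output (governor, load/core limits) depends on the host topology, as the SMT-siblings error above illustrates.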
00:07:35.318 10:35:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.318 10:35:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:35.318 10:35:14 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.318 10:35:14 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.318 10:35:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:35.318 ************************************ 00:07:35.318 START TEST scheduler_create_thread 00:07:35.318 ************************************ 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.318 2 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.318 3 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.318 4 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.318 5 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.318 10:35:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.319 6 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.319 7 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.319 8 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.319 9 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.319 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.893 10 00:07:35.893 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.893 10:35:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:35.893 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.893 10:35:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.278 10:35:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.278 10:35:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:37.278 10:35:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:37.278 10:35:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.278 10:35:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.850 10:35:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.850 10:35:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:37.850 10:35:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.850 10:35:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.792 10:35:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.792 10:35:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:38.792 10:35:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:38.792 10:35:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.792 10:35:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.364 10:35:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.364 00:07:39.364 real 0m4.225s 00:07:39.364 user 0m0.023s 00:07:39.364 sys 0m0.009s 00:07:39.364 10:35:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.364 10:35:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.364 ************************************ 00:07:39.364 END TEST scheduler_create_thread 00:07:39.364 ************************************ 00:07:39.364 10:35:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:39.364 10:35:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 788807 00:07:39.364 10:35:18 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 788807 ']' 00:07:39.364 10:35:18 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 788807 00:07:39.364 10:35:18 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:39.364 10:35:18 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.364 10:35:18 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 788807 00:07:39.624 10:35:18 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:39.624 10:35:18 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:39.624 10:35:18 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 788807' 00:07:39.624 killing process with pid 788807 00:07:39.624 10:35:18 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 788807 00:07:39.624 10:35:18 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 788807 00:07:39.624 [2024-11-19 10:35:18.751220] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
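The scheduler_create_thread subtest that just finished drives a test-only rpc.py plugin: threads are created with a CPU mask and a target busy percentage, one is dialed down to 50% activity, and one is deleted so the dynamic scheduler can rebalance. A hedged replay of the exact calls seen in this log, assuming the plugin module (scheduler_plugin.py, which ships with the test under test/event/scheduler) is importable via PYTHONPATH; the thread ids are returned by the create call and will differ between runs (11 and 12 above):

    export PYTHONPATH=./test/event/scheduler:$PYTHONPATH   # assumption: plugin module location
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # pinned thread, 100% active on core 0; prints the new thread id
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # drop to 50% busy
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # remove a thread

The -m masks (0x1 through 0x8) pin one thread per core of the 0xF app mask, matching the four reactors started above.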
00:07:39.885 00:07:39.885 real 0m5.822s 00:07:39.885 user 0m12.905s 00:07:39.885 sys 0m0.409s 00:07:39.885 10:35:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.885 10:35:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:39.885 ************************************ 00:07:39.885 END TEST event_scheduler 00:07:39.885 ************************************ 00:07:39.885 10:35:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:39.885 10:35:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:39.885 10:35:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.885 10:35:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.885 10:35:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:39.885 ************************************ 00:07:39.885 START TEST app_repeat 00:07:39.885 ************************************ 00:07:39.885 10:35:18 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:39.885 10:35:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.885 10:35:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.885 10:35:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:39.885 10:35:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.885 10:35:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:39.885 10:35:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:39.885 10:35:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:39.885 10:35:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=789873 00:07:39.885 10:35:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:39.885 10:35:19 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:39.885 10:35:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 789873' 00:07:39.885 Process app_repeat pid: 789873 00:07:39.885 10:35:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:39.885 10:35:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:39.885 spdk_app_start Round 0 00:07:39.885 10:35:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 789873 /var/tmp/spdk-nbd.sock 00:07:39.885 10:35:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 789873 ']' 00:07:39.885 10:35:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:39.885 10:35:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.885 10:35:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:39.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:39.885 10:35:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.885 10:35:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:39.885 [2024-11-19 10:35:19.033507] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
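app_repeat is started with a two-core mask (-m 0x3), a 4-second repeat interval (-t 4), and its own RPC socket, and the harness then blocks until that socket accepts connections. A simplified stand-in for the launch-and-wait step, assuming the socket path used in the trace (the real waitforlisten issues an RPC probe; polling for the socket file is a simplification):

    sock=/var/tmp/spdk-nbd.sock
    ./test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    # wait until the app creates its UNIX-domain RPC socket
    for ((i = 0; i < 100; i++)); do
        [[ -S $sock ]] && break
        sleep 0.1
    done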
00:07:39.885 [2024-11-19 10:35:19.033573] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789873 ] 00:07:40.146 [2024-11-19 10:35:19.120937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.146 [2024-11-19 10:35:19.153204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.146 [2024-11-19 10:35:19.153212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.146 10:35:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.146 10:35:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:40.146 10:35:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:40.406 Malloc0 00:07:40.406 10:35:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:40.406 Malloc1 00:07:40.407 10:35:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.407 10:35:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:40.668 /dev/nbd0 00:07:40.668 10:35:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:40.668 10:35:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:40.668 1+0 records in 00:07:40.668 1+0 records out 00:07:40.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219281 s, 18.7 MB/s 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.668 10:35:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:40.668 10:35:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.668 10:35:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.668 10:35:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:40.929 /dev/nbd1 00:07:40.929 10:35:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:40.929 10:35:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:40.929 1+0 records in 00:07:40.929 1+0 records out 00:07:40.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275163 s, 14.9 MB/s 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.929 10:35:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:40.929 10:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.929 10:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.929 
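Each nbd_start_disk call above is followed by waitfornbd, whose logic is visible in the trace: poll /proc/partitions until the kernel registers the device, then prove it serves I/O with a single direct 4 KiB read. A condensed sketch of that helper (the retry bound of 20 matches the trace; the temp-file path and sleep interval are assumptions):

    waitfornbd() {
        local nbd_name=$1 i tmp=/tmp/nbdtest
        # wait for the device node to show up in the partition table
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # confirm one 4 KiB direct read succeeds and returns data
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null &&
               [[ $(stat -c %s "$tmp") != 0 ]]; then
                rm -f "$tmp"
                return 0
            fi
            sleep 0.1
        done
        return 1
    }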
10:35:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.929 10:35:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.929 10:35:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:41.190 { 00:07:41.190 "nbd_device": "/dev/nbd0", 00:07:41.190 "bdev_name": "Malloc0" 00:07:41.190 }, 00:07:41.190 { 00:07:41.190 "nbd_device": "/dev/nbd1", 00:07:41.190 "bdev_name": "Malloc1" 00:07:41.190 } 00:07:41.190 ]' 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:41.190 { 00:07:41.190 "nbd_device": "/dev/nbd0", 00:07:41.190 "bdev_name": "Malloc0" 00:07:41.190 }, 00:07:41.190 { 00:07:41.190 "nbd_device": "/dev/nbd1", 00:07:41.190 "bdev_name": "Malloc1" 00:07:41.190 } 00:07:41.190 ]' 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:41.190 /dev/nbd1' 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:41.190 /dev/nbd1' 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:41.190 256+0 records in 00:07:41.190 256+0 records out 00:07:41.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116411 s, 90.1 MB/s 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:41.190 256+0 records in 00:07:41.190 256+0 records out 00:07:41.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118944 s, 88.2 MB/s 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:41.190 256+0 records in 00:07:41.190 256+0 records out 00:07:41.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133196 s, 78.7 MB/s 00:07:41.190 10:35:20 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:41.190 10:35:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.191 10:35:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:41.191 10:35:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:41.191 10:35:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:41.191 10:35:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.191 10:35:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.191 10:35:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:41.191 10:35:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:41.191 10:35:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.191 10:35:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:41.451 10:35:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:41.451 10:35:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:41.451 10:35:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:41.451 10:35:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.451 10:35:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.451 10:35:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:41.451 10:35:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:41.451 10:35:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.451 10:35:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.451 10:35:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.712 10:35:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:41.973 10:35:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:41.973 10:35:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:42.233 10:35:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:42.233 [2024-11-19 10:35:21.246447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.233 [2024-11-19 10:35:21.276779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.233 [2024-11-19 10:35:21.276779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.233 [2024-11-19 10:35:21.305870] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:42.233 [2024-11-19 10:35:21.305899] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:45.567 10:35:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:45.567 10:35:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:45.567 spdk_app_start Round 1 00:07:45.567 10:35:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 789873 /var/tmp/spdk-nbd.sock 00:07:45.567 10:35:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 789873 ']' 00:07:45.567 10:35:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:45.567 10:35:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.567 10:35:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:45.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
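Each round brackets its I/O with nbd_get_count: two devices must be reported after nbd_start_disks, and zero after teardown. The check is plain JSON over jq, as the trace shows; a sketch of the counting step, assuming RPC_PY points at the in-repo scripts/rpc.py:

    RPC_PY=./scripts/rpc.py   # assumption; the trace uses the full workspace path

    nbd_get_count() {
        local json names
        json=$($RPC_PY -s /var/tmp/spdk-nbd.sock nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero on zero matches but still prints 0, hence || true
        echo "$names" | grep -c /dev/nbd || true
    }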
00:07:45.567 10:35:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.567 10:35:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:45.567 10:35:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.567 10:35:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:45.567 10:35:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:45.567 Malloc0 00:07:45.567 10:35:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:45.567 Malloc1 00:07:45.567 10:35:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:45.567 10:35:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.567 10:35:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:45.567 10:35:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:45.567 10:35:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.567 10:35:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:45.568 10:35:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:45.568 10:35:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.568 10:35:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:45.568 10:35:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:45.568 10:35:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.568 10:35:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:45.568 10:35:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:45.568 10:35:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:45.568 10:35:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.568 10:35:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:45.875 /dev/nbd0 00:07:45.875 10:35:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:45.875 10:35:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:45.875 1+0 records in 00:07:45.875 1+0 records out 00:07:45.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273696 s, 15.0 MB/s 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.875 10:35:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:45.875 10:35:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.875 10:35:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.875 10:35:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:46.162 /dev/nbd1 00:07:46.162 10:35:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:46.162 10:35:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:46.162 1+0 records in 00:07:46.162 1+0 records out 00:07:46.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290813 s, 14.1 MB/s 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:46.162 10:35:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:46.162 10:35:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.162 10:35:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:46.162 10:35:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.162 10:35:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.162 10:35:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:46.467 { 00:07:46.467 "nbd_device": "/dev/nbd0", 00:07:46.467 "bdev_name": "Malloc0" 00:07:46.467 }, 00:07:46.467 { 00:07:46.467 "nbd_device": "/dev/nbd1", 00:07:46.467 "bdev_name": "Malloc1" 00:07:46.467 } 00:07:46.467 ]' 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:46.467 { 00:07:46.467 "nbd_device": "/dev/nbd0", 00:07:46.467 "bdev_name": "Malloc0" 00:07:46.467 }, 00:07:46.467 { 00:07:46.467 "nbd_device": "/dev/nbd1", 00:07:46.467 "bdev_name": "Malloc1" 00:07:46.467 } 00:07:46.467 ]' 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:46.467 /dev/nbd1' 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:46.467 /dev/nbd1' 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:46.467 256+0 records in 00:07:46.467 256+0 records out 00:07:46.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126939 s, 82.6 MB/s 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:46.467 256+0 records in 00:07:46.467 256+0 records out 00:07:46.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011969 s, 87.6 MB/s 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:46.467 256+0 records in 00:07:46.467 256+0 records out 00:07:46.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128545 s, 81.6 MB/s 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.467 10:35:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.752 10:35:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:47.043 10:35:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:47.043 10:35:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:47.322 10:35:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:47.322 [2024-11-19 10:35:26.401418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.322 [2024-11-19 10:35:26.432710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.322 [2024-11-19 10:35:26.432712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.322 [2024-11-19 10:35:26.462293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:47.322 [2024-11-19 10:35:26.462323] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:50.705 10:35:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:50.705 10:35:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:50.705 spdk_app_start Round 2 00:07:50.705 10:35:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 789873 /var/tmp/spdk-nbd.sock 00:07:50.705 10:35:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 789873 ']' 00:07:50.705 10:35:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:50.705 10:35:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.705 10:35:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:50.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
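The round body is the same data-integrity pass each time: 1 MiB of random data is written through both NBD devices with direct I/O, then compared byte-for-byte against the source file. Stripped of the harness plumbing, the pattern in the trace is:

    tmp=./nbdrandtest   # the trace keeps this under spdk/test/event

    # write phase: push 256 x 4 KiB random blocks through each device
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify phase: the Malloc bdev behind each device must read back identical
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"
    done
    rm "$tmp"

A non-zero cmp exit at the first differing byte fails the round.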
00:07:50.705 10:35:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.705 10:35:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.705 10:35:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.705 10:35:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:50.705 10:35:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:50.705 Malloc0 00:07:50.705 10:35:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:50.705 Malloc1 00:07:50.705 10:35:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:50.706 10:35:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:50.966 /dev/nbd0 00:07:50.966 10:35:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:50.966 10:35:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:50.966 1+0 records in 00:07:50.966 1+0 records out 00:07:50.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275451 s, 14.9 MB/s 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:50.966 10:35:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:50.966 10:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.966 10:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:50.966 10:35:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:51.227 /dev/nbd1 00:07:51.227 10:35:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:51.227 10:35:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:51.227 1+0 records in 00:07:51.227 1+0 records out 00:07:51.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271361 s, 15.1 MB/s 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:51.227 10:35:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:51.227 10:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.227 10:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.227 10:35:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.227 10:35:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.227 10:35:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:51.487 { 00:07:51.487 "nbd_device": "/dev/nbd0", 00:07:51.487 "bdev_name": "Malloc0" 00:07:51.487 }, 00:07:51.487 { 00:07:51.487 "nbd_device": "/dev/nbd1", 00:07:51.487 "bdev_name": "Malloc1" 00:07:51.487 } 00:07:51.487 ]' 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:51.487 { 00:07:51.487 "nbd_device": "/dev/nbd0", 00:07:51.487 "bdev_name": "Malloc0" 00:07:51.487 }, 00:07:51.487 { 00:07:51.487 "nbd_device": "/dev/nbd1", 00:07:51.487 "bdev_name": "Malloc1" 00:07:51.487 } 00:07:51.487 ]' 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:51.487 /dev/nbd1' 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:51.487 /dev/nbd1' 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:51.487 256+0 records in 00:07:51.487 256+0 records out 00:07:51.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127218 s, 82.4 MB/s 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:51.487 256+0 records in 00:07:51.487 256+0 records out 00:07:51.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120211 s, 87.2 MB/s 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:51.487 256+0 records in 00:07:51.487 256+0 records out 00:07:51.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135102 s, 77.6 MB/s 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.487 10:35:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:51.749 10:35:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.749 10:35:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.749 10:35:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:51.749 10:35:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.749 10:35:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.749 10:35:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.749 10:35:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:51.749 10:35:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.749 10:35:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.749 10:35:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.009 10:35:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:52.271 10:35:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:52.271 10:35:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:52.531 10:35:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:52.531 [2024-11-19 10:35:31.564335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.531 [2024-11-19 10:35:31.594192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.531 [2024-11-19 10:35:31.594220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.531 [2024-11-19 10:35:31.623307] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:52.531 [2024-11-19 10:35:31.623336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:55.831 10:35:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 789873 /var/tmp/spdk-nbd.sock 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 789873 ']' 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:55.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
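Teardown mirrors startup: each device is detached with nbd_stop_disk, waitfornbd_exit polls until the kernel drops it from /proc/partitions, and spdk_kill_instance SIGTERM then asks the app to exit cleanly. A condensed sketch of the detach-and-wait step (RPC_PY as assumed above):

    # detach, then wait for the node to disappear (waitfornbd_exit)
    $RPC_PY -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions || break
        sleep 0.1
    done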
00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:55.831 10:35:34 event.app_repeat -- event/event.sh@39 -- # killprocess 789873 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 789873 ']' 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 789873 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 789873 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 789873' 00:07:55.831 killing process with pid 789873 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 789873 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 789873 00:07:55.831 spdk_app_start is called in Round 0. 00:07:55.831 Shutdown signal received, stop current app iteration 00:07:55.831 Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 reinitialization... 00:07:55.831 spdk_app_start is called in Round 1. 00:07:55.831 Shutdown signal received, stop current app iteration 00:07:55.831 Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 reinitialization... 00:07:55.831 spdk_app_start is called in Round 2. 00:07:55.831 Shutdown signal received, stop current app iteration 00:07:55.831 Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 reinitialization... 00:07:55.831 spdk_app_start is called in Round 3. 
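killprocess, visible in the trace for both pids 788807 and 789873, is defensive: it verifies the pid is set and still alive, and on Linux checks the process name before signalling. The real helper special-cases a sudo wrapper; this condensed sketch simply bails out in that case:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        if [[ $(uname) == Linux ]]; then
            # never signal sudo itself; the target should be the reactor process
            [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true   # reap; only works when pid is our child, as here
    }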
00:07:55.831 Shutdown signal received, stop current app iteration 00:07:55.831 10:35:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:55.831 10:35:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:55.831 00:07:55.831 real 0m15.830s 00:07:55.831 user 0m34.830s 00:07:55.831 sys 0m2.237s 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.831 10:35:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:55.831 ************************************ 00:07:55.831 END TEST app_repeat 00:07:55.831 ************************************ 00:07:55.831 10:35:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:55.831 10:35:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:55.831 10:35:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.831 10:35:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.831 10:35:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.831 ************************************ 00:07:55.831 START TEST cpu_locks 00:07:55.831 ************************************ 00:07:55.831 10:35:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:55.831 * Looking for test storage... 00:07:55.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:55.831 10:35:35 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:55.831 10:35:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:55.831 10:35:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.092 10:35:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.092 10:35:35 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:56.092 10:35:35 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.092 10:35:35 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.092 --rc genhtml_branch_coverage=1 00:07:56.092 --rc genhtml_function_coverage=1 00:07:56.092 --rc genhtml_legend=1 00:07:56.092 --rc geninfo_all_blocks=1 00:07:56.092 --rc geninfo_unexecuted_blocks=1 00:07:56.092 00:07:56.092 ' 00:07:56.092 10:35:35 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.092 --rc genhtml_branch_coverage=1 00:07:56.092 --rc genhtml_function_coverage=1 00:07:56.092 --rc genhtml_legend=1 00:07:56.092 --rc geninfo_all_blocks=1 00:07:56.092 --rc geninfo_unexecuted_blocks=1 00:07:56.092 00:07:56.092 ' 00:07:56.092 10:35:35 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.092 --rc genhtml_branch_coverage=1 00:07:56.092 --rc genhtml_function_coverage=1 00:07:56.092 --rc genhtml_legend=1 00:07:56.092 --rc geninfo_all_blocks=1 00:07:56.092 --rc geninfo_unexecuted_blocks=1 00:07:56.092 00:07:56.092 ' 00:07:56.092 10:35:35 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.092 --rc genhtml_branch_coverage=1 00:07:56.092 --rc genhtml_function_coverage=1 00:07:56.092 --rc genhtml_legend=1 00:07:56.092 --rc geninfo_all_blocks=1 00:07:56.092 --rc geninfo_unexecuted_blocks=1 00:07:56.092 00:07:56.092 ' 00:07:56.092 10:35:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:56.092 10:35:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:56.092 10:35:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:56.092 10:35:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:56.092 10:35:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.092 10:35:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.092 10:35:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.092 ************************************ 
00:07:56.092 START TEST default_locks 00:07:56.092 ************************************ 00:07:56.092 10:35:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:56.092 10:35:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=793419 00:07:56.092 10:35:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 793419 00:07:56.092 10:35:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:56.092 10:35:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 793419 ']' 00:07:56.092 10:35:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.092 10:35:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.092 10:35:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.092 10:35:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.092 10:35:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.092 [2024-11-19 10:35:35.207265] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:07:56.092 [2024-11-19 10:35:35.207329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793419 ] 00:07:56.352 [2024-11-19 10:35:35.294952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.353 [2024-11-19 10:35:35.334327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.922 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.923 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:56.923 10:35:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 793419 00:07:56.923 10:35:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:56.923 10:35:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 793419 00:07:57.493 lslocks: write error 00:07:57.493 10:35:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 793419 00:07:57.493 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 793419 ']' 00:07:57.493 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 793419 00:07:57.493 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 793419 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 793419' 
00:07:57.494 killing process with pid 793419 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 793419 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 793419 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 793419 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 793419 00:07:57.494 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 793419 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 793419 ']' 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (793419) - No such process 00:07:57.755 ERROR: process (pid: 793419) is no longer running 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:57.755 00:07:57.755 real 0m1.549s 00:07:57.755 user 0m1.665s 00:07:57.755 sys 0m0.567s 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.755 10:35:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.755 ************************************ 00:07:57.755 END TEST default_locks 00:07:57.755 ************************************ 00:07:57.755 10:35:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:57.755 10:35:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.755 10:35:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.755 10:35:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.755 ************************************ 00:07:57.755 START TEST default_locks_via_rpc 00:07:57.755 ************************************ 00:07:57.755 10:35:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:57.755 10:35:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=793747 00:07:57.755 10:35:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 793747 00:07:57.755 10:35:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:57.755 10:35:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 793747 ']' 00:07:57.755 10:35:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.755 10:35:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.755 10:35:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:57.755 10:35:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.755 10:35:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.755 [2024-11-19 10:35:36.828191] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:07:57.755 [2024-11-19 10:35:36.828242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793747 ] 00:07:57.755 [2024-11-19 10:35:36.909079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.756 [2024-11-19 10:35:36.939497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 793747 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 793747 00:07:58.697 10:35:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:58.958 10:35:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 793747 00:07:58.958 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 793747 ']' 00:07:58.958 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 793747 00:07:58.958 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:58.958 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.958 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 793747 00:07:58.958 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.958 10:35:38 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.958 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 793747' 00:07:58.958 killing process with pid 793747 00:07:58.958 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 793747 00:07:58.958 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 793747 00:07:59.219 00:07:59.219 real 0m1.488s 00:07:59.219 user 0m1.606s 00:07:59.219 sys 0m0.521s 00:07:59.219 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.219 10:35:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.219 ************************************ 00:07:59.219 END TEST default_locks_via_rpc 00:07:59.219 ************************************ 00:07:59.219 10:35:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:59.219 10:35:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.219 10:35:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.219 10:35:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.219 ************************************ 00:07:59.219 START TEST non_locking_app_on_locked_coremask 00:07:59.219 ************************************ 00:07:59.219 10:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:59.219 10:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=794050 00:07:59.219 10:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 794050 /var/tmp/spdk.sock 00:07:59.219 10:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:59.219 10:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 794050 ']' 00:07:59.219 10:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.219 10:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.219 10:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.219 10:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.219 10:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.219 [2024-11-19 10:35:38.407835] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:07:59.219 [2024-11-19 10:35:38.407891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794050 ] 00:07:59.481 [2024-11-19 10:35:38.492913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.481 [2024-11-19 10:35:38.524315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=794227 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 794227 /var/tmp/spdk2.sock 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 794227 ']' 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.054 10:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.054 [2024-11-19 10:35:39.237509] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:08:00.054 [2024-11-19 10:35:39.237562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794227 ] 00:08:00.314 [2024-11-19 10:35:39.323049] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
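
The 'CPU core locks deactivated' notice above is the crux of non_locking_app_on_locked_coremask: the first spdk_tgt (pid 794050) was started plainly with -m 0x1 and holds the core-0 lock file, while the second was launched with --disable-cpumask-locks and therefore never attempts the claim, so both can run on the same mask. A sketch of the pair of launches, with binary paths shortened from the trace (the core-0 lock file is presumably /var/tmp/spdk_cpu_lock_000, per the file names that appear later in this log):

  spdk_tgt -m 0x1 &                                                # claims core 0's lock file
  pid1=$!
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # skips the claim entirely
  pid2=$!
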
00:08:00.314 [2024-11-19 10:35:39.323071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.314 [2024-11-19 10:35:39.381366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.885 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.885 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:00.885 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 794050 00:08:00.885 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 794050 00:08:00.885 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:01.458 lslocks: write error 00:08:01.458 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 794050 00:08:01.458 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 794050 ']' 00:08:01.458 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 794050 00:08:01.458 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:01.458 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.458 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794050 00:08:01.719 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.719 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.719 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794050' 00:08:01.719 killing process with pid 794050 00:08:01.719 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 794050 00:08:01.719 10:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 794050 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 794227 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 794227 ']' 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 794227 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794227 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794227' 00:08:01.979 killing 
process with pid 794227 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 794227 00:08:01.979 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 794227 00:08:02.240 00:08:02.240 real 0m2.977s 00:08:02.240 user 0m3.315s 00:08:02.240 sys 0m0.905s 00:08:02.240 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.241 10:35:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:02.241 ************************************ 00:08:02.241 END TEST non_locking_app_on_locked_coremask 00:08:02.241 ************************************ 00:08:02.241 10:35:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:02.241 10:35:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.241 10:35:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.241 10:35:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.241 ************************************ 00:08:02.241 START TEST locking_app_on_unlocked_coremask 00:08:02.241 ************************************ 00:08:02.241 10:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:02.241 10:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=794616 00:08:02.241 10:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 794616 /var/tmp/spdk.sock 00:08:02.241 10:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:02.241 10:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 794616 ']' 00:08:02.241 10:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.241 10:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.241 10:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.241 10:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.241 10:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:02.502 [2024-11-19 10:35:41.446853] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:08:02.502 [2024-11-19 10:35:41.446912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794616 ] 00:08:02.502 [2024-11-19 10:35:41.534041] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
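
Throughout these tests the pair traced as cpu_locks.sh@22, 'lslocks -p <pid>' filtered through 'grep -q spdk_cpu_lock', is the single primitive for deciding whether a target currently holds its core locks; the stray 'lslocks: write error' lines are only lslocks complaining when grep -q closes the pipe early. The helper reduces to roughly:

  locks_exist() {
    local pid=$1
    # Any /var/tmp/spdk_cpu_lock_* entry among $pid's file locks means the
    # target still owns at least one core lock.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
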
00:08:02.502 [2024-11-19 10:35:41.534066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.502 [2024-11-19 10:35:41.566543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=794933 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 794933 /var/tmp/spdk2.sock 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 794933 ']' 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:03.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.074 10:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.335 [2024-11-19 10:35:42.273492] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
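
Since the two targets share one host, each gets its own RPC endpoint: the first listens on the default /var/tmp/spdk.sock, the second is started with -r /var/tmp/spdk2.sock, and every rpc_cmd / rpc.py call then selects its instance with -s. That is what makes per-instance lock toggling possible later in the trace, for example (an illustrative pairing of two commands that both appear in this log):

  rpc.py -s /var/tmp/spdk.sock  framework_disable_cpumask_locks   # first target only
  rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # second target only
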
00:08:03.335 [2024-11-19 10:35:42.273560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794933 ] 00:08:03.335 [2024-11-19 10:35:42.359521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.335 [2024-11-19 10:35:42.417635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.907 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.907 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:03.907 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 794933 00:08:03.907 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 794933 00:08:03.907 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:04.479 lslocks: write error 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 794616 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 794616 ']' 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 794616 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794616 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794616' 00:08:04.479 killing process with pid 794616 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 794616 00:08:04.479 10:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 794616 00:08:05.052 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 794933 00:08:05.052 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 794933 ']' 00:08:05.052 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 794933 00:08:05.052 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:05.052 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.052 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794933 00:08:05.052 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.052 10:35:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.052 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794933' 00:08:05.052 killing process with pid 794933 00:08:05.052 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 794933 00:08:05.052 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 794933 00:08:05.313 00:08:05.313 real 0m2.868s 00:08:05.313 user 0m3.189s 00:08:05.313 sys 0m0.866s 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.313 ************************************ 00:08:05.313 END TEST locking_app_on_unlocked_coremask 00:08:05.313 ************************************ 00:08:05.313 10:35:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:05.313 10:35:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.313 10:35:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.313 10:35:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.313 ************************************ 00:08:05.313 START TEST locking_app_on_locked_coremask 00:08:05.313 ************************************ 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=795313 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 795313 /var/tmp/spdk.sock 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 795313 ']' 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.313 10:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.313 [2024-11-19 10:35:44.386478] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:08:05.313 [2024-11-19 10:35:44.386524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795313 ] 00:08:05.313 [2024-11-19 10:35:44.468219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.313 [2024-11-19 10:35:44.499127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=795531 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 795531 /var/tmp/spdk2.sock 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 795531 /var/tmp/spdk2.sock 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 795531 /var/tmp/spdk2.sock 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 795531 ']' 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:06.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.259 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:06.259 [2024-11-19 10:35:45.245104] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
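
The second spdk_tgt above is aimed at a mask whose only core is already locked by pid 795313, so the harness wraps the wait in the NOT helper from autotest_common.sh: the wrapped command is expected to fail, NOT inverts its exit status, and the test proceeds only when startup really was refused (the claim_cpu_cores ERROR just below). Stripped of the signal-code bookkeeping that the '(( es > 128 ))' lines perform, the helper's visible behaviour is:

  NOT() {
    if "$@"; then
      return 1   # unexpected success
    fi
    return 0     # expected failure; the trace records es=1
  }
  NOT waitforlisten 795531 /var/tmp/spdk2.sock
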
00:08:06.259 [2024-11-19 10:35:45.245163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795531 ] 00:08:06.259 [2024-11-19 10:35:45.332092] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 795313 has claimed it. 00:08:06.259 [2024-11-19 10:35:45.332128] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:06.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (795531) - No such process 00:08:06.831 ERROR: process (pid: 795531) is no longer running 00:08:06.831 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.831 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:06.831 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:06.831 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.831 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:06.831 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.831 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 795313 00:08:06.831 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 795313 00:08:06.831 10:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:07.403 lslocks: write error 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 795313 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 795313 ']' 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 795313 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 795313 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 795313' 00:08:07.403 killing process with pid 795313 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 795313 00:08:07.403 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 795313 00:08:07.403 00:08:07.404 real 0m2.231s 00:08:07.404 user 0m2.540s 00:08:07.404 sys 0m0.627s 00:08:07.404 10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.404 
10:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.404 ************************************ 00:08:07.404 END TEST locking_app_on_locked_coremask 00:08:07.404 ************************************ 00:08:07.665 10:35:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:07.665 10:35:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.665 10:35:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.665 10:35:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.665 ************************************ 00:08:07.665 START TEST locking_overlapped_coremask 00:08:07.665 ************************************ 00:08:07.665 10:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:07.665 10:35:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=795776 00:08:07.665 10:35:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 795776 /var/tmp/spdk.sock 00:08:07.665 10:35:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:07.665 10:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 795776 ']' 00:08:07.665 10:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.665 10:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.665 10:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.665 10:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.665 10:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.665 [2024-11-19 10:35:46.695384] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
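
locking_overlapped_coremask moves from single cores to masks: the first target above takes -m 0x7 (cores 0-2) and the second, launched further below, takes -m 0x1c (cores 2-4), so the masks intersect on exactly one core. The contested core falls out of plain bit arithmetic, and it is the core named in the claim error that follows:

  printf 'overlap: 0x%x\n' $(( 0x07 & 0x1c ))   # -> overlap: 0x4, i.e. bit 2: core 2
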
00:08:07.665 [2024-11-19 10:35:46.695434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795776 ] 00:08:07.665 [2024-11-19 10:35:46.780090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.665 [2024-11-19 10:35:46.820674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.665 [2024-11-19 10:35:46.820737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.665 [2024-11-19 10:35:46.820739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.608 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.608 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=796025 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 796025 /var/tmp/spdk2.sock 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 796025 /var/tmp/spdk2.sock 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 796025 /var/tmp/spdk2.sock 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 796025 ']' 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:08.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.609 10:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.609 [2024-11-19 10:35:47.545862] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
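
The startup begun above is expected to die on core 2; once it does, the trace below runs check_remaining_locks (cpu_locks.sh@36-38), which globs the lock files actually present and compares them to the expected set for mask 0x7, /var/tmp/spdk_cpu_lock_000 through _002, confirming the refused instance neither stole nor leaked a lock. The traced comparison amounts to:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]]   # exactly the three files for cores 0-2
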
00:08:08.609 [2024-11-19 10:35:47.545916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796025 ] 00:08:08.609 [2024-11-19 10:35:47.657350] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 795776 has claimed it. 00:08:08.609 [2024-11-19 10:35:47.657394] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:09.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (796025) - No such process 00:08:09.180 ERROR: process (pid: 796025) is no longer running 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 795776 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 795776 ']' 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 795776 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 795776 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 795776' 00:08:09.180 killing process with pid 795776 00:08:09.180 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 795776 00:08:09.180 10:35:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 795776 00:08:09.441 00:08:09.441 real 0m1.783s 00:08:09.441 user 0m5.151s 00:08:09.441 sys 0m0.383s 00:08:09.441 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.441 10:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.441 ************************************ 00:08:09.441 END TEST locking_overlapped_coremask 00:08:09.441 ************************************ 00:08:09.441 10:35:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:09.441 10:35:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.441 10:35:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.441 10:35:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.441 ************************************ 00:08:09.441 START TEST locking_overlapped_coremask_via_rpc 00:08:09.441 ************************************ 00:08:09.441 10:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:09.441 10:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=796229 00:08:09.441 10:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 796229 /var/tmp/spdk.sock 00:08:09.441 10:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:09.442 10:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 796229 ']' 00:08:09.442 10:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.442 10:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.442 10:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.442 10:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.442 10:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.442 [2024-11-19 10:35:48.567074] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:08:09.442 [2024-11-19 10:35:48.567136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796229 ] 00:08:09.702 [2024-11-19 10:35:48.652845] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
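
The _via_rpc variant inverts the setup: the first target above comes up on -m 0x7 with --disable-cpumask-locks, so nothing is claimed at launch, and the core locks are only taken later through the RPC plane. For one instance the sequence shown by the trace is:

  spdk_tgt -m 0x7 --disable-cpumask-locks &                      # 'CPU core locks deactivated'
  waitforlisten $! /var/tmp/spdk.sock
  rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # claim cores 0-2 on demand
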
00:08:09.702 [2024-11-19 10:35:48.652875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.702 [2024-11-19 10:35:48.688422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.702 [2024-11-19 10:35:48.688625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.702 [2024-11-19 10:35:48.688626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=796399 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 796399 /var/tmp/spdk2.sock 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 796399 ']' 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.273 10:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.273 [2024-11-19 10:35:49.416682] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:08:10.273 [2024-11-19 10:35:49.416735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796399 ] 00:08:10.534 [2024-11-19 10:35:49.529558] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
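Two coremasks are in play across this test: the first spdk_tgt takes -m 0x7 and its reactors come up on cores 0-2, while the second takes -m 0x1c and will want cores 2-4, so the masks overlap on core 2. A small helper makes the mask-to-core mapping explicit (illustrative only, not part of the test suite):

mask_to_cores() {
    # expand a hex coremask into the list of reactor cores it selects
    local mask=$(( $1 )) core=0
    while (( mask )); do
        (( mask & 1 )) && printf '%d ' "$core"
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo
}
mask_to_cores 0x7    # -> 0 1 2   (spdk_tgt pid 796229)
mask_to_cores 0x1c   # -> 2 3 4   (spdk_tgt pid2 796399; core 2 overlaps)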
00:08:10.534 [2024-11-19 10:35:49.529588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.534 [2024-11-19 10:35:49.603119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.534 [2024-11-19 10:35:49.606280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.534 [2024-11-19 10:35:49.606280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.105 [2024-11-19 10:35:50.218249] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 796229 has claimed it. 
00:08:11.105 request: 00:08:11.105 { 00:08:11.105 "method": "framework_enable_cpumask_locks", 00:08:11.105 "req_id": 1 00:08:11.105 } 00:08:11.105 Got JSON-RPC error response 00:08:11.105 response: 00:08:11.105 { 00:08:11.105 "code": -32603, 00:08:11.105 "message": "Failed to claim CPU core: 2" 00:08:11.105 } 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 796229 /var/tmp/spdk.sock 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 796229 ']' 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.105 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.366 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.366 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:11.366 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 796399 /var/tmp/spdk2.sock 00:08:11.366 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 796399 ']' 00:08:11.366 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:11.366 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.366 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:11.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
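Stripped of the xtrace noise, the failure path this test exercises is short; every command below appears in the trace (rpc_cmd there corresponds to scripts/rpc.py here, and the waitforlisten steps are omitted):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # pid 796229
$SPDK/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # pid 796399
# both targets start because neither takes core locks yet; then the first claims its mask:
$SPDK/scripts/rpc.py framework_enable_cpumask_locks                    # locks cores 0-2
$SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# -> JSON-RPC error -32603 "Failed to claim CPU core: 2": core 2 sits inside both masks
#    and pid 796229 already holds /var/tmp/spdk_cpu_lock_002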
00:08:11.366 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.366 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.627 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.627 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:11.627 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:11.627 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:11.627 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:11.627 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:11.627 00:08:11.627 real 0m2.105s 00:08:11.627 user 0m0.863s 00:08:11.627 sys 0m0.160s 00:08:11.627 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.627 10:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.627 ************************************ 00:08:11.627 END TEST locking_overlapped_coremask_via_rpc 00:08:11.627 ************************************ 00:08:11.627 10:35:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:11.627 10:35:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 796229 ]] 00:08:11.627 10:35:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 796229 00:08:11.627 10:35:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 796229 ']' 00:08:11.627 10:35:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 796229 00:08:11.627 10:35:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:11.627 10:35:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.627 10:35:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 796229 00:08:11.627 10:35:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.627 10:35:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.627 10:35:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 796229' 00:08:11.627 killing process with pid 796229 00:08:11.627 10:35:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 796229 00:08:11.627 10:35:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 796229 00:08:11.888 10:35:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 796399 ]] 00:08:11.888 10:35:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 796399 00:08:11.888 10:35:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 796399 ']' 00:08:11.888 10:35:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 796399 00:08:11.888 10:35:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:11.888 10:35:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
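Both cpu_locks tests end by calling the same check_remaining_locks helper, whose xtrace appears twice above; reassembled as a readable function (the quoting is behaviorally equivalent to the traced pattern match):

check_remaining_locks() {
    # event/cpu_locks.sh@36-38: the lock files present must be exactly cores 0-2,
    # i.e. the surviving target's mask 0x7 and nothing leaked by the dead one
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
}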
00:08:11.888 10:35:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 796399 00:08:11.888 10:35:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:11.888 10:35:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:11.888 10:35:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 796399' 00:08:11.888 killing process with pid 796399 00:08:11.888 10:35:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 796399 00:08:11.888 10:35:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 796399 00:08:12.149 10:35:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:12.149 10:35:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:12.149 10:35:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 796229 ]] 00:08:12.149 10:35:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 796229 00:08:12.149 10:35:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 796229 ']' 00:08:12.149 10:35:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 796229 00:08:12.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (796229) - No such process 00:08:12.149 10:35:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 796229 is not found' 00:08:12.149 Process with pid 796229 is not found 00:08:12.149 10:35:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 796399 ]] 00:08:12.149 10:35:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 796399 00:08:12.149 10:35:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 796399 ']' 00:08:12.149 10:35:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 796399 00:08:12.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (796399) - No such process 00:08:12.149 10:35:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 796399 is not found' 00:08:12.149 Process with pid 796399 is not found 00:08:12.149 10:35:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:12.149 00:08:12.149 real 0m16.309s 00:08:12.149 user 0m28.509s 00:08:12.149 sys 0m4.990s 00:08:12.149 10:35:51 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.149 10:35:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.149 ************************************ 00:08:12.149 END TEST cpu_locks 00:08:12.149 ************************************ 00:08:12.149 00:08:12.149 real 0m42.179s 00:08:12.149 user 1m22.809s 00:08:12.149 sys 0m8.316s 00:08:12.149 10:35:51 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.149 10:35:51 event -- common/autotest_common.sh@10 -- # set +x 00:08:12.149 ************************************ 00:08:12.149 END TEST event 00:08:12.149 ************************************ 00:08:12.149 10:35:51 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:12.149 10:35:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.149 10:35:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.149 10:35:51 -- common/autotest_common.sh@10 -- # set +x 00:08:12.149 ************************************ 00:08:12.149 START TEST thread 00:08:12.149 ************************************ 00:08:12.149 10:35:51 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:12.410 * Looking for test storage... 00:08:12.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.410 10:35:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.410 10:35:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.410 10:35:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.410 10:35:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.410 10:35:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.410 10:35:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.410 10:35:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.410 10:35:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.410 10:35:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.410 10:35:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.410 10:35:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.410 10:35:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:12.410 10:35:51 thread -- scripts/common.sh@345 -- # : 1 00:08:12.410 10:35:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.410 10:35:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.410 10:35:51 thread -- scripts/common.sh@365 -- # decimal 1 00:08:12.410 10:35:51 thread -- scripts/common.sh@353 -- # local d=1 00:08:12.410 10:35:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.410 10:35:51 thread -- scripts/common.sh@355 -- # echo 1 00:08:12.410 10:35:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.410 10:35:51 thread -- scripts/common.sh@366 -- # decimal 2 00:08:12.410 10:35:51 thread -- scripts/common.sh@353 -- # local d=2 00:08:12.410 10:35:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.410 10:35:51 thread -- scripts/common.sh@355 -- # echo 2 00:08:12.410 10:35:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.410 10:35:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.410 10:35:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.410 10:35:51 thread -- scripts/common.sh@368 -- # return 0 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.410 --rc genhtml_branch_coverage=1 00:08:12.410 --rc genhtml_function_coverage=1 00:08:12.410 --rc genhtml_legend=1 00:08:12.410 --rc geninfo_all_blocks=1 00:08:12.410 --rc geninfo_unexecuted_blocks=1 00:08:12.410 00:08:12.410 ' 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.410 --rc genhtml_branch_coverage=1 00:08:12.410 --rc genhtml_function_coverage=1 00:08:12.410 --rc genhtml_legend=1 00:08:12.410 --rc geninfo_all_blocks=1 00:08:12.410 --rc geninfo_unexecuted_blocks=1 00:08:12.410 00:08:12.410 ' 00:08:12.410 10:35:51 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.410 --rc genhtml_branch_coverage=1 00:08:12.410 --rc genhtml_function_coverage=1 00:08:12.410 --rc genhtml_legend=1 00:08:12.410 --rc geninfo_all_blocks=1 00:08:12.410 --rc geninfo_unexecuted_blocks=1 00:08:12.410 00:08:12.410 ' 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.410 --rc genhtml_branch_coverage=1 00:08:12.410 --rc genhtml_function_coverage=1 00:08:12.410 --rc genhtml_legend=1 00:08:12.410 --rc geninfo_all_blocks=1 00:08:12.410 --rc geninfo_unexecuted_blocks=1 00:08:12.410 00:08:12.410 ' 00:08:12.410 10:35:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.410 10:35:51 thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.410 ************************************ 00:08:12.410 START TEST thread_poller_perf 00:08:12.410 ************************************ 00:08:12.410 10:35:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:12.410 [2024-11-19 10:35:51.593375] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:08:12.410 [2024-11-19 10:35:51.593489] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796860 ] 00:08:12.672 [2024-11-19 10:35:51.684380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.672 [2024-11-19 10:35:51.724393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.672 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:13.614 [2024-11-19T09:35:52.809Z] ====================================== 00:08:13.614 [2024-11-19T09:35:52.809Z] busy:2404989080 (cyc) 00:08:13.614 [2024-11-19T09:35:52.809Z] total_run_count: 416000 00:08:13.614 [2024-11-19T09:35:52.809Z] tsc_hz: 2400000000 (cyc) 00:08:13.614 [2024-11-19T09:35:52.809Z] ====================================== 00:08:13.614 [2024-11-19T09:35:52.809Z] poller_cost: 5781 (cyc), 2408 (nsec) 00:08:13.614 00:08:13.614 real 0m1.185s 00:08:13.614 user 0m1.103s 00:08:13.614 sys 0m0.077s 00:08:13.614 10:35:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.614 10:35:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:13.614 ************************************ 00:08:13.614 END TEST thread_poller_perf 00:08:13.614 ************************************ 00:08:13.614 10:35:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:13.614 10:35:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:13.614 10:35:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.614 10:35:52 thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.875 ************************************ 00:08:13.875 START TEST thread_poller_perf 00:08:13.875 ************************************ 00:08:13.875 10:35:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:13.875 [2024-11-19 10:35:52.858477] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:08:13.875 [2024-11-19 10:35:52.858582] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797198 ] 00:08:13.875 [2024-11-19 10:35:52.946092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.875 [2024-11-19 10:35:52.977142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.875 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:08:14.816 [2024-11-19T09:35:54.011Z] ====================================== 00:08:14.816 [2024-11-19T09:35:54.011Z] busy:2401555904 (cyc) 00:08:14.816 [2024-11-19T09:35:54.011Z] total_run_count: 5562000 00:08:14.816 [2024-11-19T09:35:54.011Z] tsc_hz: 2400000000 (cyc) 00:08:14.816 [2024-11-19T09:35:54.011Z] ====================================== 00:08:14.816 [2024-11-19T09:35:54.011Z] poller_cost: 431 (cyc), 179 (nsec) 00:08:14.816 00:08:14.816 real 0m1.168s 00:08:14.816 user 0m1.089s 00:08:14.816 sys 0m0.075s 00:08:14.816 10:35:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.816 10:35:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:14.816 ************************************ 00:08:14.816 END TEST thread_poller_perf 00:08:14.816 ************************************ 00:08:15.077 10:35:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:15.077 00:08:15.077 real 0m2.708s 00:08:15.077 user 0m2.373s 00:08:15.077 sys 0m0.346s 00:08:15.077 10:35:54 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.077 10:35:54 thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.077 ************************************ 00:08:15.077 END TEST thread 00:08:15.077 ************************************ 00:08:15.077 10:35:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:15.077 10:35:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:15.077 10:35:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.077 10:35:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.077 10:35:54 -- common/autotest_common.sh@10 -- # set +x 00:08:15.077 ************************************ 00:08:15.077 START TEST app_cmdline 00:08:15.077 ************************************ 00:08:15.077 10:35:54 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:15.077 * Looking for test storage... 
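The poller_cost figures in the two reports are plain integer division over the counters printed with them; redoing the arithmetic with the numbers from the runs above:

tsc_hz=2400000000                            # cycles per second, per both reports
# run 1: -l 1 (1 us period, timed pollers)
echo $(( 2404989080 / 416000 ))              # 5781 cyc per poller invocation
echo $(( 5781 * 1000000000 / tsc_hz ))       # 2408 nsec
# run 2: -l 0 (0 us period, active pollers): ~13x more invocations, far cheaper each
echo $(( 2401555904 / 5562000 ))             # 431 cyc
echo $(( 431 * 1000000000 / tsc_hz ))        # 179 nsec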
00:08:15.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:15.077 10:35:54 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:15.077 10:35:54 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:15.077 10:35:54 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.338 10:35:54 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.338 10:35:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:15.338 10:35:54 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.338 10:35:54 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.338 --rc genhtml_branch_coverage=1 00:08:15.338 --rc genhtml_function_coverage=1 00:08:15.338 --rc genhtml_legend=1 00:08:15.338 --rc geninfo_all_blocks=1 00:08:15.338 --rc geninfo_unexecuted_blocks=1 00:08:15.338 00:08:15.338 ' 00:08:15.338 10:35:54 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.339 --rc genhtml_branch_coverage=1 00:08:15.339 --rc genhtml_function_coverage=1 00:08:15.339 --rc genhtml_legend=1 00:08:15.339 --rc geninfo_all_blocks=1 00:08:15.339 --rc geninfo_unexecuted_blocks=1 
00:08:15.339 00:08:15.339 ' 00:08:15.339 10:35:54 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.339 --rc genhtml_branch_coverage=1 00:08:15.339 --rc genhtml_function_coverage=1 00:08:15.339 --rc genhtml_legend=1 00:08:15.339 --rc geninfo_all_blocks=1 00:08:15.339 --rc geninfo_unexecuted_blocks=1 00:08:15.339 00:08:15.339 ' 00:08:15.339 10:35:54 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.339 --rc genhtml_branch_coverage=1 00:08:15.339 --rc genhtml_function_coverage=1 00:08:15.339 --rc genhtml_legend=1 00:08:15.339 --rc geninfo_all_blocks=1 00:08:15.339 --rc geninfo_unexecuted_blocks=1 00:08:15.339 00:08:15.339 ' 00:08:15.339 10:35:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:15.339 10:35:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=797596 00:08:15.339 10:35:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 797596 00:08:15.339 10:35:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:15.339 10:35:54 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 797596 ']' 00:08:15.339 10:35:54 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.339 10:35:54 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.339 10:35:54 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.339 10:35:54 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.339 10:35:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:15.339 [2024-11-19 10:35:54.375465] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:08:15.339 [2024-11-19 10:35:54.375539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797596 ] 00:08:15.339 [2024-11-19 10:35:54.459444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.339 [2024-11-19 10:35:54.494517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:16.282 { 00:08:16.282 "version": "SPDK v25.01-pre git sha1 03b7aa9c7", 00:08:16.282 "fields": { 00:08:16.282 "major": 25, 00:08:16.282 "minor": 1, 00:08:16.282 "patch": 0, 00:08:16.282 "suffix": "-pre", 00:08:16.282 "commit": "03b7aa9c7" 00:08:16.282 } 00:08:16.282 } 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:16.282 10:35:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:16.282 10:35:55 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.543 request: 00:08:16.543 { 00:08:16.543 "method": "env_dpdk_get_mem_stats", 00:08:16.543 "req_id": 1 00:08:16.543 } 00:08:16.543 Got JSON-RPC error response 00:08:16.543 response: 00:08:16.543 { 00:08:16.543 "code": -32601, 00:08:16.543 "message": "Method not found" 00:08:16.543 } 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:16.543 10:35:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 797596 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 797596 ']' 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 797596 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 797596 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 797596' 00:08:16.543 killing process with pid 797596 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 797596 00:08:16.543 10:35:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 797596 00:08:16.805 00:08:16.805 real 0m1.668s 00:08:16.805 user 0m1.981s 00:08:16.805 sys 0m0.463s 00:08:16.805 10:35:55 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.805 10:35:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:16.805 ************************************ 00:08:16.805 END TEST app_cmdline 00:08:16.805 ************************************ 00:08:16.805 10:35:55 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:16.805 10:35:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.805 10:35:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.805 10:35:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.805 ************************************ 00:08:16.805 START TEST version 00:08:16.805 ************************************ 00:08:16.805 10:35:55 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:16.805 * Looking for test storage... 
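The whole cmdline test hinges on the --rpcs-allowed allowlist the target was started with; condensed, with the responses seen above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &   # pid 797596
$SPDK/scripts/rpc.py spdk_get_version         # ok: "SPDK v25.01-pre git sha1 03b7aa9c7"
$SPDK/scripts/rpc.py rpc_get_methods          # ok: exactly the two allowed methods
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # -> -32601 "Method not found" (filtered)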
00:08:16.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:16.805 10:35:55 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:16.805 10:35:55 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:16.805 10:35:55 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.066 10:35:56 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.066 10:35:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.066 10:35:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.066 10:35:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.066 10:35:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.066 10:35:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.066 10:35:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.066 10:35:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.066 10:35:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.066 10:35:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.066 10:35:56 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.066 10:35:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.066 10:35:56 version -- scripts/common.sh@344 -- # case "$op" in 00:08:17.066 10:35:56 version -- scripts/common.sh@345 -- # : 1 00:08:17.066 10:35:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.066 10:35:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.066 10:35:56 version -- scripts/common.sh@365 -- # decimal 1 00:08:17.066 10:35:56 version -- scripts/common.sh@353 -- # local d=1 00:08:17.066 10:35:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.066 10:35:56 version -- scripts/common.sh@355 -- # echo 1 00:08:17.066 10:35:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.066 10:35:56 version -- scripts/common.sh@366 -- # decimal 2 00:08:17.066 10:35:56 version -- scripts/common.sh@353 -- # local d=2 00:08:17.066 10:35:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.066 10:35:56 version -- scripts/common.sh@355 -- # echo 2 00:08:17.066 10:35:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.066 10:35:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.066 10:35:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.066 10:35:56 version -- scripts/common.sh@368 -- # return 0 00:08:17.066 10:35:56 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.066 10:35:56 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.066 --rc genhtml_branch_coverage=1 00:08:17.066 --rc genhtml_function_coverage=1 00:08:17.066 --rc genhtml_legend=1 00:08:17.066 --rc geninfo_all_blocks=1 00:08:17.066 --rc geninfo_unexecuted_blocks=1 00:08:17.066 00:08:17.066 ' 00:08:17.066 10:35:56 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.066 --rc genhtml_branch_coverage=1 00:08:17.066 --rc genhtml_function_coverage=1 00:08:17.066 --rc genhtml_legend=1 00:08:17.066 --rc geninfo_all_blocks=1 00:08:17.066 --rc geninfo_unexecuted_blocks=1 00:08:17.066 00:08:17.066 ' 00:08:17.066 10:35:56 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.066 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.066 --rc genhtml_branch_coverage=1 00:08:17.066 --rc genhtml_function_coverage=1 00:08:17.066 --rc genhtml_legend=1 00:08:17.066 --rc geninfo_all_blocks=1 00:08:17.066 --rc geninfo_unexecuted_blocks=1 00:08:17.066 00:08:17.066 ' 00:08:17.066 10:35:56 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.066 --rc genhtml_branch_coverage=1 00:08:17.066 --rc genhtml_function_coverage=1 00:08:17.066 --rc genhtml_legend=1 00:08:17.066 --rc geninfo_all_blocks=1 00:08:17.066 --rc geninfo_unexecuted_blocks=1 00:08:17.066 00:08:17.066 ' 00:08:17.066 10:35:56 version -- app/version.sh@17 -- # get_header_version major 00:08:17.066 10:35:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:17.066 10:35:56 version -- app/version.sh@14 -- # cut -f2 00:08:17.066 10:35:56 version -- app/version.sh@14 -- # tr -d '"' 00:08:17.066 10:35:56 version -- app/version.sh@17 -- # major=25 00:08:17.066 10:35:56 version -- app/version.sh@18 -- # get_header_version minor 00:08:17.066 10:35:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:17.066 10:35:56 version -- app/version.sh@14 -- # cut -f2 00:08:17.066 10:35:56 version -- app/version.sh@14 -- # tr -d '"' 00:08:17.066 10:35:56 version -- app/version.sh@18 -- # minor=1 00:08:17.066 10:35:56 version -- app/version.sh@19 -- # get_header_version patch 00:08:17.066 10:35:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:17.066 10:35:56 version -- app/version.sh@14 -- # cut -f2 00:08:17.066 10:35:56 version -- app/version.sh@14 -- # tr -d '"' 00:08:17.066 10:35:56 version -- app/version.sh@19 -- # patch=0 00:08:17.066 10:35:56 version -- app/version.sh@20 -- # get_header_version suffix 00:08:17.066 10:35:56 version -- app/version.sh@14 -- # cut -f2 00:08:17.066 10:35:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:17.066 10:35:56 version -- app/version.sh@14 -- # tr -d '"' 00:08:17.066 10:35:56 version -- app/version.sh@20 -- # suffix=-pre 00:08:17.066 10:35:56 version -- app/version.sh@22 -- # version=25.1 00:08:17.066 10:35:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:17.066 10:35:56 version -- app/version.sh@28 -- # version=25.1rc0 00:08:17.066 10:35:56 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:17.066 10:35:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:17.066 10:35:56 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:17.066 10:35:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:17.066 00:08:17.066 real 0m0.279s 00:08:17.066 user 0m0.165s 00:08:17.066 sys 0m0.158s 00:08:17.066 10:35:56 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.066 
10:35:56 version -- common/autotest_common.sh@10 -- # set +x 00:08:17.066 ************************************ 00:08:17.066 END TEST version 00:08:17.066 ************************************ 00:08:17.066 10:35:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:17.066 10:35:56 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:17.066 10:35:56 -- spdk/autotest.sh@194 -- # uname -s 00:08:17.066 10:35:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:17.066 10:35:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:17.066 10:35:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:17.066 10:35:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:17.066 10:35:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:17.066 10:35:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:17.066 10:35:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.066 10:35:56 -- common/autotest_common.sh@10 -- # set +x 00:08:17.066 10:35:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:17.066 10:35:56 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:17.066 10:35:56 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:17.066 10:35:56 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:17.066 10:35:56 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:17.066 10:35:56 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:17.066 10:35:56 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:17.066 10:35:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.067 10:35:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.067 10:35:56 -- common/autotest_common.sh@10 -- # set +x 00:08:17.328 ************************************ 00:08:17.328 START TEST nvmf_tcp 00:08:17.328 ************************************ 00:08:17.328 10:35:56 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:17.328 * Looking for test storage... 
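Recapping the version test that just closed, before the nvmf output starts: four fields are cut out of include/spdk/version.h and reassembled, then checked against the Python package. A condensed sketch, where field stands in for the suite's get_header_version and the -pre -> rc0 step is inferred from the traced result rather than shown in the trace:

V=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
field() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$V" | cut -f2 | tr -d '"'; }
major=$(field MAJOR)    # 25
minor=$(field MINOR)    # 1
patch=$(field PATCH)    # 0
suffix=$(field SUFFIX)  # -pre
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
[[ $suffix == -pre ]] && version=${version}rc0       # assumed mapping; yields 25.1rc0
python3 -c 'import spdk; print(spdk.__version__)'    # 25.1rc0 -- must match $version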
00:08:17.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:17.328 10:35:56 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.328 10:35:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.328 10:35:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.328 10:35:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.328 10:35:56 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:17.328 10:35:56 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.328 10:35:56 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.328 --rc genhtml_branch_coverage=1 00:08:17.328 --rc genhtml_function_coverage=1 00:08:17.328 --rc genhtml_legend=1 00:08:17.328 --rc geninfo_all_blocks=1 00:08:17.328 --rc geninfo_unexecuted_blocks=1 00:08:17.328 00:08:17.328 ' 00:08:17.328 10:35:56 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.328 --rc genhtml_branch_coverage=1 00:08:17.328 --rc genhtml_function_coverage=1 00:08:17.328 --rc genhtml_legend=1 00:08:17.328 --rc geninfo_all_blocks=1 00:08:17.328 --rc geninfo_unexecuted_blocks=1 00:08:17.328 00:08:17.328 ' 00:08:17.328 10:35:56 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:08:17.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.328 --rc genhtml_branch_coverage=1 00:08:17.328 --rc genhtml_function_coverage=1 00:08:17.328 --rc genhtml_legend=1 00:08:17.328 --rc geninfo_all_blocks=1 00:08:17.328 --rc geninfo_unexecuted_blocks=1 00:08:17.328 00:08:17.328 ' 00:08:17.328 10:35:56 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.328 --rc genhtml_branch_coverage=1 00:08:17.328 --rc genhtml_function_coverage=1 00:08:17.328 --rc genhtml_legend=1 00:08:17.328 --rc geninfo_all_blocks=1 00:08:17.328 --rc geninfo_unexecuted_blocks=1 00:08:17.328 00:08:17.328 ' 00:08:17.328 10:35:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:17.329 10:35:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:17.329 10:35:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:17.329 10:35:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.329 10:35:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.329 10:35:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:17.329 ************************************ 00:08:17.329 START TEST nvmf_target_core 00:08:17.329 ************************************ 00:08:17.329 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:17.590 * Looking for test storage... 00:08:17.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.591 --rc genhtml_branch_coverage=1 00:08:17.591 --rc genhtml_function_coverage=1 00:08:17.591 --rc genhtml_legend=1 00:08:17.591 --rc geninfo_all_blocks=1 00:08:17.591 --rc geninfo_unexecuted_blocks=1 00:08:17.591 00:08:17.591 ' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.591 --rc genhtml_branch_coverage=1 00:08:17.591 --rc genhtml_function_coverage=1 00:08:17.591 --rc genhtml_legend=1 00:08:17.591 --rc geninfo_all_blocks=1 00:08:17.591 --rc geninfo_unexecuted_blocks=1 00:08:17.591 00:08:17.591 ' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.591 --rc genhtml_branch_coverage=1 00:08:17.591 --rc genhtml_function_coverage=1 00:08:17.591 --rc genhtml_legend=1 00:08:17.591 --rc geninfo_all_blocks=1 00:08:17.591 --rc geninfo_unexecuted_blocks=1 00:08:17.591 00:08:17.591 ' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.591 --rc genhtml_branch_coverage=1 00:08:17.591 --rc genhtml_function_coverage=1 00:08:17.591 --rc genhtml_legend=1 00:08:17.591 --rc geninfo_all_blocks=1 00:08:17.591 --rc geninfo_unexecuted_blocks=1 00:08:17.591 00:08:17.591 ' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.591 10:35:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.854 
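The trace above (scripts/common.sh@333-368) walks the lcov version check component by component: split both version strings on '.', '-' and ':', then compare index-wise until one side wins. A condensed, standalone sketch of that logic, with simplified names rather than the verbatim SPDK helpers:

cmp_lt() {
    # split "1.15" -> (1 15) and "2" -> (2) on the same separators the trace uses
    local -a v1 v2
    local i n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower component decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

cmp_lt 1.15 2 && echo "lcov 1.15 < 2"   # same branch the log takes before enabling the --rc lcov_*_coverage options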
************************************ 00:08:17.854 START TEST nvmf_abort 00:08:17.854 ************************************ 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:17.854 * Looking for test storage... 00:08:17.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.854 --rc genhtml_branch_coverage=1 00:08:17.854 --rc genhtml_function_coverage=1 00:08:17.854 --rc genhtml_legend=1 00:08:17.854 --rc geninfo_all_blocks=1 00:08:17.854 --rc geninfo_unexecuted_blocks=1 00:08:17.854 00:08:17.854 ' 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.854 --rc genhtml_branch_coverage=1 00:08:17.854 --rc genhtml_function_coverage=1 00:08:17.854 --rc genhtml_legend=1 00:08:17.854 --rc geninfo_all_blocks=1 00:08:17.854 --rc geninfo_unexecuted_blocks=1 00:08:17.854 00:08:17.854 ' 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.854 --rc genhtml_branch_coverage=1 00:08:17.854 --rc genhtml_function_coverage=1 00:08:17.854 --rc genhtml_legend=1 00:08:17.854 --rc geninfo_all_blocks=1 00:08:17.854 --rc geninfo_unexecuted_blocks=1 00:08:17.854 00:08:17.854 ' 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.854 --rc genhtml_branch_coverage=1 00:08:17.854 --rc genhtml_function_coverage=1 00:08:17.854 --rc genhtml_legend=1 00:08:17.854 --rc geninfo_all_blocks=1 00:08:17.854 --rc geninfo_unexecuted_blocks=1 00:08:17.854 00:08:17.854 ' 00:08:17.854 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.854 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
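Both times nvmf/common.sh is sourced above, its line 33 emits "[: : integer expression expected": the traced test is '[' '' -eq 1 ']', i.e. an unset flag expands to an empty string that -eq cannot parse as a number. A minimal repro and the usual defaulted-expansion form that avoids the message (the flag name here is a hypothetical stand-in, not the actual variable in common.sh):

SOME_TEST_FLAG=""
[ "$SOME_TEST_FLAG" -eq 1 ] && echo on        # -> [: : integer expression expected
[ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo on   # empty defaults to 0; numeric test stays valid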
00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:17.855 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.994 10:36:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:25.994 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:25.994 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:25.994 10:36:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:25.994 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:25.994 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.994 10:36:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:25.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:08:25.994 00:08:25.994 --- 10.0.0.2 ping statistics --- 00:08:25.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.994 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:08:25.994 00:08:25.994 --- 10.0.0.1 ping statistics --- 00:08:25.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.994 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=802189 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 802189 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 802189 ']' 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.994 10:36:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:25.994 [2024-11-19 10:36:04.623714] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
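The nvmftestinit trace above builds a two-endpoint TCP topology from the two E810 ports: the target-side port is moved into a fresh network namespace, each side gets a 10.0.0.x/24 address, port 4420 is opened in iptables, and reachability is ping-verified in both directions before nvmf_tgt is launched under ip netns exec. A minimal veth-based sketch of the same layout (veth names are stand-ins for the physical cvl_0_0/cvl_0_1 ports; run as root):

ip netns add tgt_ns                                         # cvl_0_0_ns_spdk in the log
ip link add veth_ini type veth peer name veth_tgt
ip link set veth_tgt netns tgt_ns                           # target port lives in the namespace
ip addr add 10.0.0.1/24 dev veth_ini                        # initiator side
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt   # target side
ip link set veth_ini up
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                          # initiator -> target
ip netns exec tgt_ns ping -c 1 10.0.0.1                     # target -> initiator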
00:08:25.994 [2024-11-19 10:36:04.623779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.994 [2024-11-19 10:36:04.724723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.994 [2024-11-19 10:36:04.778853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.994 [2024-11-19 10:36:04.778909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.994 [2024-11-19 10:36:04.778923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.994 [2024-11-19 10:36:04.778933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.994 [2024-11-19 10:36:04.778943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.994 [2024-11-19 10:36:04.780804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.994 [2024-11-19 10:36:04.780964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.994 [2024-11-19 10:36:04.780966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 [2024-11-19 10:36:05.507352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 Malloc0 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 Delay0 
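With the target running, abort.sh drives it over JSON-RPC: create the TCP transport, then back a Malloc bdev with a delay bdev so queued I/O stays in flight long enough to be aborted. The same calls issued standalone with scripts/rpc.py from an SPDK checkout — rpc_cmd in the log is a wrapper over the same RPCs, and the delay arguments are in microseconds, so roughly 1 s is added per I/O:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256   # transport options as passed via $NVMF_TRANSPORT_OPTS plus abort.sh's extras
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB bdev, 4 KiB blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # avg/p99 read and write latency, in us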
00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 [2024-11-19 10:36:05.596600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.566 10:36:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:26.566 [2024-11-19 10:36:05.747765] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:29.111 Initializing NVMe Controllers 00:08:29.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:29.111 controller IO queue size 128 less than required 00:08:29.111 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:29.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:29.111 Initialization complete. Launching workers. 
00:08:29.111 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28533 00:08:29.111 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28594, failed to submit 62 00:08:29.111 success 28537, unsuccessful 57, failed 0 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.111 rmmod nvme_tcp 00:08:29.111 rmmod nvme_fabrics 00:08:29.111 rmmod nvme_keyring 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 802189 ']' 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 802189 00:08:29.111 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 802189 ']' 00:08:29.112 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 802189 00:08:29.112 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:08:29.112 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.112 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 802189 00:08:29.112 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:29.112 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:29.112 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 802189' 00:08:29.112 killing process with pid 802189 00:08:29.112 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 802189 00:08:29.112 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 802189 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.112 10:36:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.026 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:31.026 00:08:31.026 real 0m13.335s 00:08:31.026 user 0m13.842s 00:08:31.026 sys 0m6.544s 00:08:31.026 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.026 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.026 ************************************ 00:08:31.026 END TEST nvmf_abort 00:08:31.026 ************************************ 00:08:31.026 10:36:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:31.026 10:36:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:31.026 10:36:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.026 10:36:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.026 ************************************ 00:08:31.026 START TEST nvmf_ns_hotplug_stress 00:08:31.026 ************************************ 00:08:31.026 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:31.288 * Looking for test storage... 
00:08:31.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.288 --rc genhtml_branch_coverage=1 00:08:31.288 --rc genhtml_function_coverage=1 00:08:31.288 --rc genhtml_legend=1 00:08:31.288 --rc geninfo_all_blocks=1 00:08:31.288 --rc geninfo_unexecuted_blocks=1 00:08:31.288 00:08:31.288 ' 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.288 --rc genhtml_branch_coverage=1 00:08:31.288 --rc genhtml_function_coverage=1 00:08:31.288 --rc genhtml_legend=1 00:08:31.288 --rc geninfo_all_blocks=1 00:08:31.288 --rc geninfo_unexecuted_blocks=1 00:08:31.288 00:08:31.288 ' 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.288 --rc genhtml_branch_coverage=1 00:08:31.288 --rc genhtml_function_coverage=1 00:08:31.288 --rc genhtml_legend=1 00:08:31.288 --rc geninfo_all_blocks=1 00:08:31.288 --rc geninfo_unexecuted_blocks=1 00:08:31.288 00:08:31.288 ' 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.288 --rc genhtml_branch_coverage=1 00:08:31.288 --rc genhtml_function_coverage=1 00:08:31.288 --rc genhtml_legend=1 00:08:31.288 --rc geninfo_all_blocks=1 00:08:31.288 --rc geninfo_unexecuted_blocks=1 00:08:31.288 00:08:31.288 ' 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.288 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:31.289 10:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:39.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.429 
10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:39.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:39.429 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:39.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.429 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:08:39.430 00:08:39.430 --- 10.0.0.2 ping statistics --- 00:08:39.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.430 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:08:39.430 00:08:39.430 --- 10.0.0.1 ping statistics --- 00:08:39.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.430 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=807437 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 807437 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
807437 ']' 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.430 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.430 [2024-11-19 10:36:17.966487] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:08:39.430 [2024-11-19 10:36:17.966555] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.430 [2024-11-19 10:36:18.065856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.430 [2024-11-19 10:36:18.118114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.430 [2024-11-19 10:36:18.118177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.430 [2024-11-19 10:36:18.118191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.430 [2024-11-19 10:36:18.118199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.430 [2024-11-19 10:36:18.118205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
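
At this point the trace above has finished environment setup: both E810 ports (0000:4b:00.0 and 0000:4b:00.1, device 0x8086:0x159b, ice driver) were found, cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24) while cvl_0_1 stayed in the root namespace as the initiator (10.0.0.1/24), both directions ping cleanly, and nvmf_tgt (pid 807437) is running inside the namespace with core mask 0xE (cores 1-3, matching the three reactor notices below). The earlier "[: : integer expression expected" from nvmf/common.sh line 33 is benign: an empty flag reached a numeric '[' ... -eq 1 ']' test, which simply evaluated false and let the script continue.

What follows is the target bring-up and the hotplug stress proper. Below is a condensed sketch assembled from the RPC calls traced after this point; the while-loop form is an inference from the repeating kill -0 / remove_ns / add_ns / resize pattern, not a copy of ns_hotplug_stress.sh itself, and the perf backgrounding is likewise an assumption.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Transport, subsystem, and listeners on the namespaced target IP
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Backing bdevs: a malloc disk wrapped in a delay bdev, plus a null bdev
    $rpc_py bdev_malloc_create 32 512 -b Malloc0
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py bdev_null_create NULL1 1000 512
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Workload the hotplug loop races against (backgrounding assumed)
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 & PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID"; do            # loop as long as perf is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"   # prints "true" on success
    done

The delay bdev's large artificial latencies (bdev_delay_create takes microseconds, so 1000000 is roughly a second per I/O) keep requests in flight while namespace 1 is removed and re-added, and each pass also grows NULL1 by one size unit, which is why the trace below counts null_size up from 1001 through 1041 and beyond until the 30-second perf run exits.
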
00:08:39.430 [2024-11-19 10:36:18.120205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.430 [2024-11-19 10:36:18.120409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.430 [2024-11-19 10:36:18.120411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.691 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.691 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:39.691 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.691 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.691 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.691 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.691 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:39.691 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:39.951 [2024-11-19 10:36:19.001248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.951 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.212 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.212 [2024-11-19 10:36:19.396342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.474 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.474 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:40.735 Malloc0 00:08:40.735 10:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:40.995 Delay0 00:08:40.995 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.255 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:41.255 NULL1 00:08:41.255 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:41.516 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:41.516 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=808073 00:08:41.516 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:41.516 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.777 10:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.038 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:42.038 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:42.038 true 00:08:42.038 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:42.038 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.299 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.561 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:42.561 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:42.561 true 00:08:42.822 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:42.822 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.823 10:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.083 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:43.083 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:43.083 true 00:08:43.344 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:43.344 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.344 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.604 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:43.604 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:43.866 true 00:08:43.866 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:43.866 10:36:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.866 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.128 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:44.128 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:44.388 true 00:08:44.388 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:44.388 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.388 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.648 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:44.649 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:44.909 true 00:08:44.909 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:44.909 10:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.168 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.168 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:45.169 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:45.428 true 00:08:45.428 10:36:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:45.428 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.689 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.689 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:45.689 10:36:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:45.950 true 00:08:45.950 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:45.950 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.211 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.211 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:46.211 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:46.472 true 00:08:46.472 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:46.472 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.733 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.994 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:46.994 10:36:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:46.994 true 00:08:46.994 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:46.994 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.255 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.517 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:47.517 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:47.517 true 00:08:47.517 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:47.517 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.778 10:36:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.039 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:48.039 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:48.039 true 00:08:48.039 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:48.039 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.300 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.561 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:48.561 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:48.561 true 00:08:48.822 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:48.822 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.822 10:36:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.083 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:49.083 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:49.345 true 00:08:49.345 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:49.345 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.345 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.605 10:36:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:49.605 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:49.866 true 00:08:49.866 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:49.866 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.126 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.126 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:50.126 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:50.387 true 00:08:50.387 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:50.387 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.648 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.648 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:50.648 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:50.909 true 00:08:50.909 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:50.909 10:36:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.170 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.430 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:51.430 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:51.430 true 00:08:51.430 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:51.430 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.691 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.953 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:51.953 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:51.953 true 00:08:51.953 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:51.953 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.214 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.475 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:52.475 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:52.475 true 00:08:52.475 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:52.475 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.736 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.996 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:52.996 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:52.996 true 00:08:53.257 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:53.257 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.257 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.517 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:53.517 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:53.777 true 00:08:53.777 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:53.777 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.777 10:36:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.038 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:54.038 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:54.299 true 00:08:54.299 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:54.300 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.300 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.560 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:54.560 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:54.822 true 00:08:54.822 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:54.822 10:36:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.082 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.082 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:55.082 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:55.344 true 00:08:55.344 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:55.344 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.604 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.604 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:55.604 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:55.865 true 00:08:55.865 10:36:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:55.865 10:36:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.126 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.126 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:56.126 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:56.388 true 00:08:56.388 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:56.388 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.648 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.909 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:56.909 10:36:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:56.909 true 00:08:56.909 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:56.909 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.170 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.431 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:57.431 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:57.431 true 00:08:57.431 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:57.431 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.692 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.953 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:57.953 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:57.953 true 00:08:57.953 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:57.953 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.213 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.475 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:58.475 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:58.475 true 00:08:58.475 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:58.475 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.737 10:36:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.997 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:58.997 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:58.997 true 00:08:58.997 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:58.997 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.258 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.518 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:59.518 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:59.518 true 00:08:59.780 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:08:59.780 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.780 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.040 10:36:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:00.040 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:00.300 true 00:09:00.300 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:00.300 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.301 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.560 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:00.560 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:00.820 true 00:09:00.820 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:00.820 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.820 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:01.081 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:01.341 true 00:09:01.341 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:01.341 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.601 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.601 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:01.601 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:01.862 true 00:09:01.862 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:01.862 10:36:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.122 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.122 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:02.122 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:02.382 true 00:09:02.382 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:02.382 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.642 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.642 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:02.642 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:02.902 true 00:09:02.902 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:02.902 10:36:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.164 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.164 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:03.164 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:03.424 true 00:09:03.424 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:03.424 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.685 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.946 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:09:03.946 10:36:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:09:03.946 true 00:09:03.946 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:03.946 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.207 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.466 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:09:04.466 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:09:04.466 true 00:09:04.466 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:04.467 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.726 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.987 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:09:04.987 10:36:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:09:04.987 true 00:09:04.987 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:04.987 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.248 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:09:05.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:09:05.509 true 00:09:05.769 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:05.769 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.769 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.029 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:09:06.029 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:09:06.290 true 00:09:06.290 10:36:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:06.290 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.290 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.550 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:09:06.550 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:09:06.811 true 00:09:06.811 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:06.811 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.811 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.072 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:09:07.072 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:09:07.332 true 00:09:07.332 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:07.332 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.592 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.592 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:09:07.592 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:09:07.853 true 00:09:07.853 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:07.853 10:36:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.113 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.113 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:09:08.113 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:09:08.374 true 00:09:08.374 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:08.374 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.634 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.634 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:09:08.634 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:09:08.895 true 00:09:08.895 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:08.895 10:36:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.155 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.415 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:09:09.416 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:09:09.416 true 00:09:09.416 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:09.416 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.676 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.936 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:09:09.936 10:36:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:09:09.936 true 00:09:09.936 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073 00:09:09.936 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.200 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.463 10:36:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:09:10.463 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:09:10.463 true
00:09:10.463 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073
00:09:10.463 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:10.724 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:10.985 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:09:10.985 10:36:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:09:10.985 true
00:09:10.985 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073
00:09:10.985 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:11.245 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:11.506 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:09:11.506 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:09:11.506 true
00:09:11.766 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073
00:09:11.766 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:11.766 10:36:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:11.766 Initializing NVMe Controllers
00:09:11.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:11.766 Controller IO queue size 128, less than required.
00:09:11.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:11.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:11.766 Initialization complete. Launching workers.
00:09:11.766 ========================================================
00:09:11.766                                                                                Latency(us)
00:09:11.766 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:09:11.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   31230.84      15.25    4098.39    1115.60    7922.07
00:09:11.766 ========================================================
00:09:11.766 Total                                                                   :   31230.84      15.25    4098.39    1115.60    7922.07
00:09:11.766
00:09:12.026 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:09:12.026 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:09:12.026 true
00:09:12.287 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 808073
00:09:12.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (808073) - No such process
00:09:12.287 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 808073
00:09:12.287 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:12.287 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:12.547 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:12.547 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:12.547 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:12.547 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:12.547 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:12.547 null0
00:09:12.807 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:12.807 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:12.807 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:12.807 null1
00:09:12.807 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:12.807 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:12.807 10:36:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:13.067 null2
00:09:13.067 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:13.067 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.067
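The stretch of trace that ends above is the hot-plug loop at ns_hotplug_stress.sh lines 44-50: while the initiator perf process (PID 808073) stays alive, the target keeps hot-removing NSID 1, re-adding the Delay0 bdev, and growing the NULL1 bdev by one block per pass (null_size 1027 through 1056 in this excerpt). A minimal bash sketch of that loop, reconstructed from the xtrace entries rather than copied from the script source; the variable names rpc and perf_pid are assumptions:

# Sketch of the @44-@50 loop implied by the trace above (not the script source).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf_pid=808073                                                   # initiator perf process seen in the trace
null_size=1024
while kill -0 "$perf_pid"; do                                     # @44: run until perf exits
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # @45: hot-remove NSID 1
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: re-attach Delay0 as a fresh namespace
    null_size=$((null_size + 1))                                  # @49: 1025, 1026, ... (1027-1056 visible here)
    "$rpc" bdev_null_resize NULL1 $null_size                      # @50: resize the bdev behind the namespace perf reads (NSID 2, per the summary)
done

The loop exits when kill -0 finally fails, which is the "No such process" message above; lines 53-55 then reap perf and remove both namespaces before the parallel phase begins.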
10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:13.327 null3 00:09:13.327 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.327 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.327 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:13.327 null4 00:09:13.327 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.327 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.327 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:13.588 null5 00:09:13.588 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.588 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.588 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:13.848 null6 00:09:13.848 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.848 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.848 10:36:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:13.848 null7 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.848 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 814618 814619 814621 814623 814625 814627 814629 814630 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.849 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.109 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.109 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.110 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.110 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
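From line 58 onward the script runs its parallel churn phase, which produces the interleaved xtrace output filling the rest of this excerpt: eight 100 MiB null bdevs with a 4096-byte block size are created, then one background add_remove worker per namespace repeatedly attaches and detaches its bdev on nqn.2016-06.io.spdk:cnode1. A sketch assembled from the @14-@18 and @58-@66 entries, again a reconstruction from the trace, with rpc as an assumed shorthand for scripts/rpc.py:

# Sketch of the parallel add/remove phase implied by the entries above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

add_remove() {                               # @14-@18: body of each worker
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do           # @16: ten add/remove rounds per namespace
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
    done
}

nthreads=8
pids=()                                      # @58
for ((i = 0; i < nthreads; i++)); do         # @59
    "$rpc" bdev_null_create "null$i" 100 4096 # @60: 100 MiB null bdev, 4096 B block size
done
for ((i = 0; i < nthreads; i++)); do         # @62
    add_remove $((i + 1)) "null$i" &         # @63: NSIDs 1-8 churned concurrently
    pids+=($!)                               # @64
done
wait "${pids[@]}"                            # @66: the eight worker PIDs listed above

Because the eight workers are background subshells sharing one log, their xtrace lines interleave in no fixed order, which is why the add and remove entries for different NSIDs alternate seemingly at random below.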
00:09:14.110 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.110 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.110 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.110 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.369 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.370 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.630 10:36:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.630 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.631 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.631 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.631 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.631 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.631 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.631 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.631 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.891 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:09:14.892 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.892 10:36:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.892 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.892 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.892 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.892 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.892 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.892 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.154 10:36:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.154 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.155 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.417 10:36:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.417 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.678 10:36:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.678 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.679 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.679 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.679 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.679 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.939 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.940 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.940 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.940 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.940 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.940 10:36:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.940 10:36:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.940 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.200 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.461 10:36:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.461 10:36:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.461 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.720 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.721 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.983 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.983 
10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.983 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.244 10:36:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.244 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.505 10:36:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.505 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.765 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.765 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.765 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.765 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.765 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.765 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.765 10:36:56 
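
Every iteration traced above exercises the same two RPCs from ns_hotplug_stress.sh (script lines 16-18 in the @-markers): attach one of the null bdevs as a namespace, then detach a namespace, with ids picked at random so the adds and removes race each other while I/O runs. A minimal sketch of that loop, reconstructed from the traced commands; the rpc_py variable, the random pick, and the error suppression are illustrative assumptions, not lifted from the script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; ++i)); do
        n=$((RANDOM % 8 + 1))    # nsid 1..8; bdev null0..null7 backs nsid n as null$((n-1))
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" || true
    done
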
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.765 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.765 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.765 10:36:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.025 rmmod nvme_tcp 00:09:18.025 rmmod nvme_fabrics 00:09:18.025 rmmod nvme_keyring 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 807437 ']' 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 807437 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 807437 ']' 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 807437 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 807437 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 807437' 00:09:18.025 killing process with pid 807437 00:09:18.025 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 807437 00:09:18.025 10:36:57 
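
With the rounds done, the script clears its signal traps and runs nvmftestfini, whose steps are traced above and below: sync, unload the initiator kernel modules (the bare rmmod lines are modprobe's verbose output), kill the target process, strip the SPDK_NVMF iptables rules, and drop the test network namespace. A condensed sketch of that cleanup; the retry structure around modprobe mirrors the traced `for i in {1..20}`, the pid is specific to this run, and the exact guards are assumptions:

    sync                                    # flush I/O before unloading modules
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    kill 807437                             # nvmf target pid for this run
    wait 807437                             # reap it (traced as autotest_common.sh@978)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK_NVMF rules
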
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 807437
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:18.285 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:18.286 10:36:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:20.200 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:20.200
00:09:20.200 real 0m49.128s
00:09:20.200 user 3m20.062s
00:09:20.200 sys 0m17.373s
00:09:20.200 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:20.200 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:20.200 ************************************
00:09:20.200 END TEST nvmf_ns_hotplug_stress
00:09:20.200 ************************************
00:09:20.200 10:36:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:09:20.200 10:36:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:20.200 10:36:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:20.201 10:36:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:20.463 ************************************
00:09:20.463 START TEST nvmf_delete_subsystem
************************************
00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:09:20.463 * Looking for test storage...
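
Before the new test does any work, delete_subsystem.sh probes the installed lcov: the scripts/common.sh trace below splits each version string on '.', '-', and ':' and compares the pieces numerically (the `lt 1.15 2` call, the `IFS=.-:` reads, and the per-component `decimal` checks). A self-contained sketch of that comparison with simplified names; the real helper also validates each field, which is omitted here:

    # version_lt A B: succeed (return 0) when version A sorts before version B
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((v = 0; v < n; v++)); do
            local x=${a[v]:-0} y=${b[v]:-0}
            if (( x > y )); then return 1; fi
            if (( x < y )); then return 0; fi
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # same decision as the traced 'lt 1.15 2'
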
00:09:20.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.463 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.464 --rc genhtml_branch_coverage=1 00:09:20.464 --rc genhtml_function_coverage=1 00:09:20.464 --rc genhtml_legend=1 00:09:20.464 --rc geninfo_all_blocks=1 00:09:20.464 --rc geninfo_unexecuted_blocks=1 00:09:20.464 00:09:20.464 ' 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.464 --rc genhtml_branch_coverage=1 00:09:20.464 --rc genhtml_function_coverage=1 00:09:20.464 --rc genhtml_legend=1 00:09:20.464 --rc geninfo_all_blocks=1 00:09:20.464 --rc geninfo_unexecuted_blocks=1 00:09:20.464 00:09:20.464 ' 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.464 --rc genhtml_branch_coverage=1 00:09:20.464 --rc genhtml_function_coverage=1 00:09:20.464 --rc genhtml_legend=1 00:09:20.464 --rc geninfo_all_blocks=1 00:09:20.464 --rc geninfo_unexecuted_blocks=1 00:09:20.464 00:09:20.464 ' 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.464 --rc genhtml_branch_coverage=1 00:09:20.464 --rc genhtml_function_coverage=1 00:09:20.464 --rc genhtml_legend=1 00:09:20.464 --rc geninfo_all_blocks=1 00:09:20.464 --rc geninfo_unexecuted_blocks=1 00:09:20.464 00:09:20.464 ' 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.464 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.465 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.726 10:36:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:29.122 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.122 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.123 
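
Each "Found ..." line in this stretch comes from matching a PCI vendor:device pair against the E810/X722/mlx tables built above, then resolving the port's network interface through sysfs. A small sketch of that resolution step; the address is the one from this run, and the lspci call is only for illustration:

    pci=0000:4b:00.0                       # first E810 port detected in this run
    lspci -s "$pci" -nn                    # should report vendor 0x8086, device 0x159b (ice driver)
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net devices under $pci: ${dev##*/}"
    done
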
10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:29.123 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:29.123 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:29.123 Found net devices under 0000:4b:00.1: cvl_0_1 
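The per-device discovery loop being traced here (nvmf/common.sh@410-429) reduces to a small amount of shell. The sketch below is a paraphrase under assumed variable names, with the link-state check approximated via the sysfs operstate attribute; the script's actual "up" test may read a different source:

    # For each supported PCI function, find its bound kernel net interface
    # via sysfs and keep it only if the link is up.
    net_devs=()
    for pci in "${pci_devs[@]}"; do                        # e.g. 0000:4b:00.0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob -> net dirs
        for net_dev in "${pci_net_devs[@]}"; do
            [[ -e $net_dev ]] || continue                  # no net driver bound
            if [[ $(<"$net_dev/operstate") == up ]]; then  # assumption: operstate
                dev=${net_dev##*/}                         # strip path -> cvl_0_0
                echo "Found net devices under $pci: $dev"
                net_devs+=("$dev")
            fi
        done
    done

The ${net_dev##*/} strip matches the @427 trace above; both E810 ports resolve to ice-driven interfaces (cvl_0_0, cvl_0_1) that the TCP init step then splits between target and initiator.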
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:29.123 10:37:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:29.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:29.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms
00:09:29.123 --- 10.0.0.2 ping statistics ---
00:09:29.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:29.123 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:29.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:29.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms
00:09:29.123 --- 10.0.0.1 ping statistics ---
00:09:29.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:29.123 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=819902
00:09:29.123 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 819902
00:09:29.124 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:09:29.124 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 819902 ']'
00:09:29.124 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:29.124 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:29.124 10:37:07
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.124 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.124 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 [2024-11-19 10:37:07.191239] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:09:29.124 [2024-11-19 10:37:07.191302] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.124 [2024-11-19 10:37:07.289109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.124 [2024-11-19 10:37:07.341260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.124 [2024-11-19 10:37:07.341315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.124 [2024-11-19 10:37:07.341325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.124 [2024-11-19 10:37:07.341332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.124 [2024-11-19 10:37:07.341338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.124 [2024-11-19 10:37:07.343124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.124 [2024-11-19 10:37:07.343128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 [2024-11-19 10:37:08.074285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:29.124 10:37:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 [2024-11-19 10:37:08.098601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 NULL1 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 Delay0 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=820176 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:29.124 10:37:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:29.124 [2024-11-19 10:37:08.235739] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
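Condensed, the bring-up that the trace above just performed is a short RPC sequence plus one perf run. The sketch below is a minimal manual reproduction, not the test script itself; the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions based on standard SPDK layout, while the binary paths, arguments, and names are taken from the trace:

    # Target side: nvmf_tgt runs inside the netns created earlier, so its TCP
    # listener on 10.0.0.2:4420 is reachable from the host-side interface (10.0.0.1).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512           # 1000 MB backing bdev, 512 B blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Initiator side: keep 128 queued I/Os in flight so plenty are still
    # outstanding when the subsystem is deleted out from under them.
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The bdev_delay layer is what makes the race interesting: with roughly 1,000,000 us injected per I/O, the perf queue is guaranteed to be full of incomplete commands when nvmf_delete_subsystem arrives.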
00:09:31.040 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:31.040 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:31.303 [... long run of 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' lines elided: these are the in-flight perf I/Os failing as the subsystem is deleted underneath them ...]
00:09:31.303 [2024-11-19 10:37:10.450410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced680 is same with the state(6) to be set
00:09:31.303 [... further Read/Write error bursts elided between each of the qpair state transitions below ...]
00:09:31.304 [2024-11-19 10:37:10.456678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2b2c00d490 is same with the state(6) to be set
00:09:32.248 [2024-11-19 10:37:11.419347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee9a0 is same with the state(6) to be set
00:09:32.509 [2024-11-19 10:37:11.453755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced860 is same with the state(6) to be set
00:09:32.509 [2024-11-19 10:37:11.454140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced4a0 is same with the state(6) to be set
00:09:32.510 [2024-11-19 10:37:11.459434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2b2c00d020 is same with the state(6) to be set
00:09:32.510 [2024-11-19 10:37:11.459520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2b2c00d7c0 is same with the state(6) to be set
00:09:32.510 Initializing NVMe Controllers
00:09:32.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:32.510 Controller IO queue size 128, less than required.
00:09:32.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:32.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:32.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:32.510 Initialization complete. Launching workers.
00:09:32.510 ========================================================
00:09:32.510                                                                               Latency(us)
00:09:32.510 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:09:32.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   163.28     0.08  909015.64     323.92 1006855.53
00:09:32.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   165.77     0.08  906463.22     305.85 1013770.70
00:09:32.510 ========================================================
00:09:32.510 Total                                                                    :   329.05     0.16  907729.77     305.85 1013770.70
00:09:32.510 [2024-11-19 10:37:11.460084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcee9a0 (9): Bad file descriptor
00:09:32.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:32.510 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.510 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:32.510 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 820176
00:09:32.510 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 820176
00:09:33.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (820176) - No such process
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 820176
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 820176
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 820176
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:33.081 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:33.082 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.082 10:37:11
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.082 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.082 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.082 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.082 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.082 [2024-11-19 10:37:11.991056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.082 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.082 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.082 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.082 10:37:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.082 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.082 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=820857 00:09:33.082 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:33.082 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:33.082 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820857 00:09:33.082 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:33.082 [2024-11-19 10:37:12.096881] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
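The repeated @57/@58/@60 lines that follow are one polling loop in delete_subsystem.sh; reconstructed as standalone shell below. The function wrapper and message are illustrative, not the script's own, and the trace shows the budget varies per call site (> 30 in the first round, > 20 here):

    # Give perf a bounded window to exit on its own after the subsystem is
    # deleted; fail if it is still alive when the budget runs out.
    wait_for_perf_exit() {
        local perf_pid=$1 delay=0
        while kill -0 "$perf_pid" 2>/dev/null; do   # process still running?
            sleep 0.5
            (( delay++ > 20 )) && return 1          # ~10 s at 0.5 s per turn
        done
        return 0                                    # perf exited on its own
    }

In the first round above (pid 820176), once kill -0 started reporting 'No such process' the script asserted NOT wait on the reaped pid, expecting a nonzero exit status (the es=1 in the trace), because perf is supposed to die reporting I/O errors rather than finish cleanly.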
00:09:33.342 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:33.342 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820857 00:09:33.342 10:37:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:33.912 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:33.912 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820857 00:09:33.912 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:34.484 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:34.484 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820857 00:09:34.484 10:37:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:35.054 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:35.054 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820857 00:09:35.054 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:35.625 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:35.625 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820857 00:09:35.625 10:37:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:35.885 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:35.885 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820857 00:09:35.885 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:36.145 Initializing NVMe Controllers 00:09:36.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:36.145 Controller IO queue size 128, less than required. 00:09:36.145 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:36.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:36.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:36.145 Initialization complete. Launching workers. 
00:09:36.145 ========================================================
00:09:36.145                                                                               Latency(us)
00:09:36.145 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:09:36.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00     0.06 1001762.22 1000130.62 1004547.50
00:09:36.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00     0.06 1003004.55 1000426.91 1041338.13
00:09:36.145 ========================================================
00:09:36.145 Total                                                                    :   256.00     0.12 1002383.38 1000130.62 1041338.13
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820857
00:09:36.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (820857) - No such process
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 820857
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:36.405 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 819902 ']'
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 819902
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 819902 ']'
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 819902
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 819902
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo
']' 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 819902' 00:09:36.666 killing process with pid 819902 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 819902 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 819902 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.666 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.667 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.212 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:39.212 00:09:39.212 real 0m18.444s 00:09:39.212 user 0m31.176s 00:09:39.212 sys 0m6.856s 00:09:39.212 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.212 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.212 ************************************ 00:09:39.212 END TEST nvmf_delete_subsystem 00:09:39.212 ************************************ 00:09:39.212 10:37:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:39.212 10:37:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.212 10:37:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.212 10:37:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.212 ************************************ 00:09:39.212 START TEST nvmf_host_management 00:09:39.212 ************************************ 00:09:39.212 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:39.212 * Looking for test storage... 
00:09:39.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.212 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.213 --rc genhtml_branch_coverage=1 00:09:39.213 --rc genhtml_function_coverage=1 00:09:39.213 --rc genhtml_legend=1 00:09:39.213 --rc geninfo_all_blocks=1 00:09:39.213 --rc geninfo_unexecuted_blocks=1 00:09:39.213 00:09:39.213 ' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.213 --rc genhtml_branch_coverage=1 00:09:39.213 --rc genhtml_function_coverage=1 00:09:39.213 --rc genhtml_legend=1 00:09:39.213 --rc geninfo_all_blocks=1 00:09:39.213 --rc geninfo_unexecuted_blocks=1 00:09:39.213 00:09:39.213 ' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.213 --rc genhtml_branch_coverage=1 00:09:39.213 --rc genhtml_function_coverage=1 00:09:39.213 --rc genhtml_legend=1 00:09:39.213 --rc geninfo_all_blocks=1 00:09:39.213 --rc geninfo_unexecuted_blocks=1 00:09:39.213 00:09:39.213 ' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.213 --rc genhtml_branch_coverage=1 00:09:39.213 --rc genhtml_function_coverage=1 00:09:39.213 --rc genhtml_legend=1 00:09:39.213 --rc geninfo_all_blocks=1 00:09:39.213 --rc geninfo_unexecuted_blocks=1 00:09:39.213 00:09:39.213 ' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:09:39.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.213 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:47.361 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.361 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:47.362 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:47.362 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.362 10:37:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:47.362 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:47.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:09:47.362 00:09:47.362 --- 10.0.0.2 ping statistics --- 00:09:47.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.362 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:09:47.362 00:09:47.362 --- 10.0.0.1 ping statistics --- 00:09:47.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.362 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=825881 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 825881 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:47.362 10:37:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 825881 ']' 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.362 10:37:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.362 [2024-11-19 10:37:25.758143] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:09:47.362 [2024-11-19 10:37:25.758216] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.362 [2024-11-19 10:37:25.857731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.362 [2024-11-19 10:37:25.910882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.362 [2024-11-19 10:37:25.910933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.362 [2024-11-19 10:37:25.910942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.362 [2024-11-19 10:37:25.910953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.362 [2024-11-19 10:37:25.910959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
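(The target above is launched with core mask -m 0x1E: 0x1E = 0b11110, i.e. cores 1-4, which matches "Total cores available: 4" here and the four reactor notices that follow; bdevperf, started later with -c 0x1, gets core 0. A minimal bash sketch of decoding such a mask; the helper name is illustrative, not part of the SPDK scripts:)
  # decode_coremask: print the CPU cores selected by a hex core mask (illustrative helper)
  decode_coremask() {
      local mask=$(( $1 )) core=0
      while (( mask )); do
          (( mask & 1 )) && echo "core $core"   # low bit set -> this core is selected
          (( mask >>= 1, core++ ))              # shift to the next core's bit
      done
  }
  decode_coremask 0x1E   # prints: core 1, core 2, core 3, core 4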
00:09:47.362 [2024-11-19 10:37:25.912951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.362 [2024-11-19 10:37:25.913112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.362 [2024-11-19 10:37:25.913271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:47.362 [2024-11-19 10:37:25.913412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.624 [2024-11-19 10:37:26.629404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.624 Malloc0 00:09:47.624 [2024-11-19 10:37:26.708799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=826268 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 826268 /var/tmp/bdevperf.sock 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 826268 ']' 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:47.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.624 { 00:09:47.624 "params": { 00:09:47.624 "name": "Nvme$subsystem", 00:09:47.624 "trtype": "$TEST_TRANSPORT", 00:09:47.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.624 "adrfam": "ipv4", 00:09:47.624 "trsvcid": "$NVMF_PORT", 00:09:47.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.624 "hdgst": ${hdgst:-false}, 00:09:47.624 "ddgst": ${ddgst:-false} 00:09:47.624 }, 00:09:47.624 "method": "bdev_nvme_attach_controller" 00:09:47.624 } 00:09:47.624 EOF 00:09:47.624 )") 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:47.624 10:37:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.624 "params": { 00:09:47.624 "name": "Nvme0", 00:09:47.624 "trtype": "tcp", 00:09:47.624 "traddr": "10.0.0.2", 00:09:47.624 "adrfam": "ipv4", 00:09:47.624 "trsvcid": "4420", 00:09:47.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:47.624 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:47.624 "hdgst": false, 00:09:47.624 "ddgst": false 00:09:47.624 }, 00:09:47.624 "method": "bdev_nvme_attach_controller" 00:09:47.624 }' 00:09:47.624 [2024-11-19 10:37:26.818467] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
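(The trace above shows how the test hands bdevperf its controller config: gen_nvmf_target_json prints the JSON and the shell passes it as --json /dev/fd/63, i.e. bash process substitution. A minimal stand-alone sketch of the same pattern, assuming the standard SPDK JSON-config envelope around the bdev_nvme_attach_controller entry that the log prints; the binary path and envelope are assumptions, the params are the resolved values from the trace:)
  # Equivalent of the gen_nvmf_target_json | --json /dev/fd/63 pattern above:
  # feed bdevperf an inline JSON config through process substitution.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  )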
00:09:47.624 [2024-11-19 10:37:26.818528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826268 ] 00:09:47.885 [2024-11-19 10:37:26.911264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.885 [2024-11-19 10:37:26.963743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.146 Running I/O for 10 seconds... 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=724 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 724 -ge 100 ']' 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:48.719 10:37:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.719 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.719 [2024-11-19 10:37:27.716590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e130 is same with the state(6) to be set
[the tcp.c:1773 line above repeats ~48 more times verbatim between 10:37:27.716665 and 10:37:27.717029, same tqpair=0x139e130, same state(6)]
00:09:48.720 [2024-11-19 10:37:27.719984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:48.720 [2024-11-19 10:37:27.720049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3 between 10:37:27.720062 and 10:37:27.720107]
00:09:48.720 [2024-11-19 10:37:27.720115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db000 is same with the state(6) to be set
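(Bursts of identical messages like the tqpair line above, and the abort sweep that follows, are easier to read as counts. A purely illustrative triage sketch for a saved copy of this console output; "build.log" is a stand-in file name:)
  # Count each distinct "recv state" variant, then the total aborted commands.
  grep -o 'recv state of tqpair=0x[0-9a-f]* is same with the state([0-9]*)' build.log |
    sort | uniq -c | sort -rn
  grep -c 'ABORTED - SQ DELETION' build.log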
00:09:48.720 [2024-11-19 10:37:27.720586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.720 [2024-11-19 10:37:27.720609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same command / ABORTED - SQ DELETION completion pair repeats for every other outstanding I/O on qid:1, namely WRITE cid:1-6 (lba:106624-107264) and READ cid:20-63 (lba:100864-106368), len:128 each, between 10:37:27.720627 and 10:37:27.721588]
00:09:48.722 [2024-11-19 10:37:27.721598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19
10:37:27.721606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len: 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.722 128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 [2024-11-19 10:37:27.721832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.722 [2024-11-19 10:37:27.721841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.722 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:48.722 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.722 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.722 [2024-11-19 10:37:27.723195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:48.722 task offset: 106496 on job bdev=Nvme0n1 fails 00:09:48.722 00:09:48.722 Latency(us) 00:09:48.722 [2024-11-19T09:37:27.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.722 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:48.722 Job: Nvme0n1 ended in about 0.54 seconds with error 00:09:48.722 Verification LBA range: start 0x0 length 0x400 00:09:48.722 Nvme0n1 : 0.54 1448.98 90.56 117.68 0.00 39815.42 1733.97 36044.80 00:09:48.722 [2024-11-19T09:37:27.917Z] =================================================================================================================== 00:09:48.722 [2024-11-19T09:37:27.917Z] Total : 1448.98 90.56 117.68 0.00 39815.42 1733.97 36044.80 00:09:48.722 [2024-11-19 10:37:27.725431] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:48.722 [2024-11-19 10:37:27.725473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23db000 (9): Bad file descriptor 00:09:48.722 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.722 10:37:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:48.722 [2024-11-19 10:37:27.778393] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 826268 00:09:49.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (826268) - No such process 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:49.663 { 00:09:49.663 "params": { 00:09:49.663 "name": "Nvme$subsystem", 00:09:49.663 "trtype": "$TEST_TRANSPORT", 00:09:49.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:49.663 "adrfam": "ipv4", 00:09:49.663 "trsvcid": "$NVMF_PORT", 00:09:49.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:49.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:49.663 "hdgst": ${hdgst:-false}, 00:09:49.663 "ddgst": ${ddgst:-false} 00:09:49.663 }, 00:09:49.663 "method": "bdev_nvme_attach_controller" 00:09:49.663 } 00:09:49.663 EOF 00:09:49.663 )") 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:49.663 10:37:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:49.663 "params": { 00:09:49.663 "name": "Nvme0", 00:09:49.663 "trtype": "tcp", 00:09:49.663 "traddr": "10.0.0.2", 00:09:49.663 "adrfam": "ipv4", 00:09:49.663 "trsvcid": "4420", 00:09:49.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:49.663 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:49.663 "hdgst": false, 00:09:49.663 "ddgst": false 00:09:49.663 }, 00:09:49.663 "method": "bdev_nvme_attach_controller" 00:09:49.663 }' 00:09:49.663 [2024-11-19 10:37:28.794079] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:09:49.663 [2024-11-19 10:37:28.794131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826620 ] 00:09:49.923 [2024-11-19 10:37:28.882360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.923 [2024-11-19 10:37:28.917357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.923 Running I/O for 1 seconds... 
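The bdevperf run above receives its bdev configuration over a process-substitution fd (/dev/fd/62) generated by gen_nvmf_target_json. A minimal standalone sketch of the same invocation follows, assuming an SPDK build tree and the target from this run still listening at 10.0.0.2:4420; only the params object appears verbatim in this log, and the outer "subsystems"/"config" wrapper is assumed here to follow the usual SPDK JSON-config layout. 

#!/usr/bin/env bash 
# Sketch: attach to the NVMe-oF TCP target and drive verify I/O with bdevperf. 
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
./build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 --json <(cat <<'EOF' 
{ 
  "subsystems": [ 
    { 
      "subsystem": "bdev", 
      "config": [ 
        { 
          "method": "bdev_nvme_attach_controller", 
          "params": { 
            "name": "Nvme0", 
            "trtype": "tcp", 
            "traddr": "10.0.0.2", 
            "adrfam": "ipv4", 
            "trsvcid": "4420", 
            "subnqn": "nqn.2016-06.io.spdk:cnode0", 
            "hostnqn": "nqn.2016-06.io.spdk:host0", 
            "hdgst": false, 
            "ddgst": false 
          } 
        } 
      ] 
    } 
  ] 
} 
EOF 
) 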
00:09:51.304 1745.00 IOPS, 109.06 MiB/s 00:09:51.304 Latency(us) 00:09:51.304 [2024-11-19T09:37:30.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.304 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:51.304 Verification LBA range: start 0x0 length 0x400 00:09:51.304 Nvme0n1 : 1.01 1790.02 111.88 0.00 0.00 35047.70 2498.56 32986.45 00:09:51.304 [2024-11-19T09:37:30.499Z] =================================================================================================================== 00:09:51.304 [2024-11-19T09:37:30.499Z] Total : 1790.02 111.88 0.00 0.00 35047.70 2498.56 32986.45 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.304 rmmod nvme_tcp 00:09:51.304 rmmod nvme_fabrics 00:09:51.304 rmmod nvme_keyring 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 825881 ']' 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 825881 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 825881 ']' 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 825881 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825881 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:51.304 10:37:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825881' 00:09:51.304 killing process with pid 825881 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 825881 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 825881 00:09:51.304 [2024-11-19 10:37:30.417056] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.304 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:53.846 00:09:53.846 real 0m14.571s 00:09:53.846 user 0m22.676s 00:09:53.846 sys 0m6.861s 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.846 ************************************ 00:09:53.846 END TEST nvmf_host_management 00:09:53.846 ************************************ 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.846 ************************************ 00:09:53.846 START TEST nvmf_lvol 00:09:53.846 ************************************ 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:53.846 * Looking for test storage... 00:09:53.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.846 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.847 --rc genhtml_branch_coverage=1 00:09:53.847 --rc genhtml_function_coverage=1 00:09:53.847 --rc genhtml_legend=1 00:09:53.847 --rc geninfo_all_blocks=1 00:09:53.847 --rc geninfo_unexecuted_blocks=1 00:09:53.847 00:09:53.847 ' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.847 --rc genhtml_branch_coverage=1 00:09:53.847 --rc genhtml_function_coverage=1 00:09:53.847 --rc genhtml_legend=1 00:09:53.847 --rc geninfo_all_blocks=1 00:09:53.847 --rc geninfo_unexecuted_blocks=1 00:09:53.847 00:09:53.847 ' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.847 --rc genhtml_branch_coverage=1 00:09:53.847 --rc genhtml_function_coverage=1 00:09:53.847 --rc genhtml_legend=1 00:09:53.847 --rc geninfo_all_blocks=1 00:09:53.847 --rc geninfo_unexecuted_blocks=1 00:09:53.847 00:09:53.847 ' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.847 --rc genhtml_branch_coverage=1 00:09:53.847 --rc genhtml_function_coverage=1 00:09:53.847 --rc genhtml_legend=1 00:09:53.847 --rc geninfo_all_blocks=1 00:09:53.847 --rc geninfo_unexecuted_blocks=1 00:09:53.847 00:09:53.847 ' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.847 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.992 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:01.993 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:01.993 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.993 10:37:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:01.993 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:01.993 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.993 10:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:10:01.993 00:10:01.993 --- 10.0.0.2 ping statistics --- 00:10:01.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.993 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:01.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:10:01.993 00:10:01.993 --- 10.0.0.1 ping statistics --- 00:10:01.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.993 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.993 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=831177 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 831177 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 831177 ']' 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.994 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:01.994 [2024-11-19 10:37:40.398315] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
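The ping exchange above verifies the namespace plumbing before the target comes up. Condensed from the nvmf_tcp_init and nvmfappstart steps in this log, a sketch of that setup is below; the device names cvl_0_0/cvl_0_1 are the two e810 ports found earlier, and the final wait loop is illustrative only (the real waitforlisten helper also checks that the RPC socket responds). 

# Sketch: one port moves into a namespace for the target, its peer stays for the initiator. 
ip netns add cvl_0_0_ns_spdk 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port 
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP 
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP 
ip link set cvl_0_1 up 
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
ip netns exec cvl_0_0_ns_spdk ip link set lo up 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP 
ping -c 1 10.0.0.2                                                   # initiator -> target sanity check 

# Launch the target inside the namespace, as nvmfappstart does above. 
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 & 
nvmfpid=$! 
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done                # illustrative wait 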
00:10:01.994 [2024-11-19 10:37:40.398379] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.994 [2024-11-19 10:37:40.496099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:01.994 [2024-11-19 10:37:40.548773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.994 [2024-11-19 10:37:40.548830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.994 [2024-11-19 10:37:40.548839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.994 [2024-11-19 10:37:40.548846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.994 [2024-11-19 10:37:40.548852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.994 [2024-11-19 10:37:40.550694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.994 [2024-11-19 10:37:40.550895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.994 [2024-11-19 10:37:40.550896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.255 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.255 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:10:02.255 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.255 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:02.255 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:02.255 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.255 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:02.255 [2024-11-19 10:37:41.442917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.516 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.777 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:02.777 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.777 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:02.777 10:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:03.037 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:03.298 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6c0747db-1ee2-4111-aeea-d7564ed2596e 00:10:03.298 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6c0747db-1ee2-4111-aeea-d7564ed2596e lvol 20 00:10:03.559 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7cc4e3ca-418b-4871-93f8-5ec82b25e4a7 00:10:03.559 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:03.559 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7cc4e3ca-418b-4871-93f8-5ec82b25e4a7 00:10:03.820 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:04.080 [2024-11-19 10:37:43.061627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.080 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:04.080 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=831694 00:10:04.080 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:04.080 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:05.465 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7cc4e3ca-418b-4871-93f8-5ec82b25e4a7 MY_SNAPSHOT 00:10:05.465 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=26465557-23f4-4d40-bd39-5b9983f791cd 00:10:05.465 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7cc4e3ca-418b-4871-93f8-5ec82b25e4a7 30 00:10:05.465 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 26465557-23f4-4d40-bd39-5b9983f791cd MY_CLONE 00:10:05.725 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f9f93769-e3af-483b-ae2b-dcf2b59c12a0 00:10:05.725 10:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f9f93769-e3af-483b-ae2b-dcf2b59c12a0 00:10:05.985 10:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 831694 00:10:15.980 Initializing NVMe Controllers 00:10:15.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:15.980 Controller IO queue size 128, less than required. 00:10:15.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
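Strung together, the RPC sequence the lvol test just issued builds the stack bottom-up (malloc bdevs -> raid0 -> lvstore -> lvol -> NVMe-oF subsystem) and then mutates the lvol while spdk_nvme_perf drives I/O against it. A recap sketch using the same rpc.py calls is below; the rpc path is shortened for readability, and the UUIDs are captured from command output rather than hard-coded, since the values above (6c0747db-..., 7cc4e3ca-..., etc.) are specific to this run. 

rpc=./scripts/rpc.py 
$rpc nvmf_create_transport -t tcp -o -u 8192 
$rpc bdev_malloc_create 64 512                          # -> Malloc0 
$rpc bdev_malloc_create 64 512                          # -> Malloc1 
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # lvstore UUID 
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # initial size 20 (MiB, per the test's LVOL_BDEV_INIT_SIZE) 
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol" 
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
# While spdk_nvme_perf runs against the namespace, exercise the snapshot/clone path: 
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT) 
$rpc bdev_lvol_resize "$lvol" 30                        # grow to LVOL_BDEV_FINAL_SIZE 
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE) 
$rpc bdev_lvol_inflate "$clone"                         # decouple the clone from its snapshot 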
00:10:15.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:15.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:15.980 Initialization complete. Launching workers. 00:10:15.980 ======================================================== 00:10:15.980 Latency(us) 00:10:15.980 Device Information : IOPS MiB/s Average min max 00:10:15.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17493.91 68.34 7319.51 1694.47 43851.27 00:10:15.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15823.41 61.81 8090.61 3888.58 39276.42 00:10:15.980 ======================================================== 00:10:15.980 Total : 33317.32 130.15 7685.73 1694.47 43851.27 00:10:15.980 00:10:15.980 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:15.980 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7cc4e3ca-418b-4871-93f8-5ec82b25e4a7 00:10:15.980 10:37:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c0747db-1ee2-4111-aeea-d7564ed2596e 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.980 rmmod nvme_tcp 00:10:15.980 rmmod nvme_fabrics 00:10:15.980 rmmod nvme_keyring 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 831177 ']' 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 831177 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 831177 ']' 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 831177 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 831177 00:10:15.980 10:37:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 831177' 00:10:15.980 killing process with pid 831177 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 831177 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 831177 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.980 10:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.363 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.363 00:10:17.363 real 0m23.830s 00:10:17.363 user 1m4.465s 00:10:17.363 sys 0m8.639s 00:10:17.363 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.363 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:17.363 ************************************ 00:10:17.363 END TEST nvmf_lvol 00:10:17.363 ************************************ 00:10:17.363 10:37:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:17.363 10:37:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.363 10:37:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.363 10:37:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.363 ************************************ 00:10:17.363 START TEST nvmf_lvs_grow 00:10:17.363 ************************************ 00:10:17.363 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:17.624 * Looking for test storage... 
00:10:17.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.624 --rc genhtml_branch_coverage=1 00:10:17.624 --rc genhtml_function_coverage=1 00:10:17.624 --rc genhtml_legend=1 00:10:17.624 --rc geninfo_all_blocks=1 00:10:17.624 --rc geninfo_unexecuted_blocks=1 00:10:17.624 00:10:17.624 ' 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.624 --rc genhtml_branch_coverage=1 00:10:17.624 --rc genhtml_function_coverage=1 00:10:17.624 --rc genhtml_legend=1 00:10:17.624 --rc geninfo_all_blocks=1 00:10:17.624 --rc geninfo_unexecuted_blocks=1 00:10:17.624 00:10:17.624 ' 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.624 --rc genhtml_branch_coverage=1 00:10:17.624 --rc genhtml_function_coverage=1 00:10:17.624 --rc genhtml_legend=1 00:10:17.624 --rc geninfo_all_blocks=1 00:10:17.624 --rc geninfo_unexecuted_blocks=1 00:10:17.624 00:10:17.624 ' 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.624 --rc genhtml_branch_coverage=1 00:10:17.624 --rc genhtml_function_coverage=1 00:10:17.624 --rc genhtml_legend=1 00:10:17.624 --rc geninfo_all_blocks=1 00:10:17.624 --rc geninfo_unexecuted_blocks=1 00:10:17.624 00:10:17.624 ' 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:17.624 10:37:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:17.624 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.625 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:25.762 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:25.762 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.762 10:38:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:25.762 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.762 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:25.763 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.763 10:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:25.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:10:25.763 00:10:25.763 --- 10.0.0.2 ping statistics --- 00:10:25.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.763 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:10:25.763 00:10:25.763 --- 10.0.0.1 ping statistics --- 00:10:25.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.763 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=838154 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 838154 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 838154 ']' 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.763 10:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:25.763 [2024-11-19 10:38:04.259853] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
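Condensed from the trace above, the target-side bring-up amounts to the following sketch (paths abbreviated; the cvl_0_0_ns_spdk namespace and the 10.0.0.2/24 address were configured by nvmf/common.sh earlier in this log):

  # start the target inside its network namespace: shm id 0, all trace groups enabled, core 0 only
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # once /var/tmp/spdk.sock answers, create the TCP transport (flags as recorded in this run)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192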
00:10:25.763 [2024-11-19 10:38:04.259944] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.763 [2024-11-19 10:38:04.357463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.763 [2024-11-19 10:38:04.409505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.763 [2024-11-19 10:38:04.409558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.764 [2024-11-19 10:38:04.409567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.764 [2024-11-19 10:38:04.409575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.764 [2024-11-19 10:38:04.409582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.764 [2024-11-19 10:38:04.410373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.024 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.024 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:26.024 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.024 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.024 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:26.025 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.025 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:26.285 [2024-11-19 10:38:05.292484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:26.285 ************************************ 00:10:26.285 START TEST lvs_grow_clean 00:10:26.285 ************************************ 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:26.285 10:38:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:26.285 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:26.546 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:26.546 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:26.806 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:26.806 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:26.806 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:26.806 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:26.806 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:26.806 10:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a lvol 150 00:10:27.066 10:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ffc1ef4e-0691-4d91-be60-17abac859dd9 00:10:27.066 10:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:27.066 10:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:27.326 [2024-11-19 10:38:06.329625] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:27.326 [2024-11-19 10:38:06.329699] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:27.326 true 00:10:27.326 10:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:27.326 10:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:27.587 10:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:27.587 10:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:27.587 10:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ffc1ef4e-0691-4d91-be60-17abac859dd9 00:10:27.847 10:38:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:28.107 [2024-11-19 10:38:07.055926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.107 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:28.107 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:28.107 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=838772 00:10:28.107 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:28.107 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 838772 /var/tmp/bdevperf.sock 00:10:28.107 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 838772 ']' 00:10:28.107 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:28.108 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.108 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:28.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:28.108 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.108 10:38:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:28.108 [2024-11-19 10:38:07.283581] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
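The initiator side then runs bdevperf as a long-lived app on its own RPC socket and attaches the exported namespace over TCP; condensed from this trace, with paths abbreviated and flags verbatim:

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # attach the target's namespace as bdev Nvme0n1, then kick off the run
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The backing AIO file was already truncated from 200M to 400M and rescanned a few lines up; during the 10-second randwrite run the script calls bdev_lvol_grow_lvstore, and total_data_clusters moving from 49 to 99 below is the doubling you would expect at the 4 MiB cluster size, less lvstore metadata.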
00:10:28.108 [2024-11-19 10:38:07.283652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838772 ] 00:10:28.368 [2024-11-19 10:38:07.377025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.368 [2024-11-19 10:38:07.429629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.939 10:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.939 10:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:28.939 10:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:29.199 Nvme0n1 00:10:29.199 10:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:29.460 [ 00:10:29.460 { 00:10:29.460 "name": "Nvme0n1", 00:10:29.460 "aliases": [ 00:10:29.460 "ffc1ef4e-0691-4d91-be60-17abac859dd9" 00:10:29.460 ], 00:10:29.460 "product_name": "NVMe disk", 00:10:29.460 "block_size": 4096, 00:10:29.461 "num_blocks": 38912, 00:10:29.461 "uuid": "ffc1ef4e-0691-4d91-be60-17abac859dd9", 00:10:29.461 "numa_id": 0, 00:10:29.461 "assigned_rate_limits": { 00:10:29.461 "rw_ios_per_sec": 0, 00:10:29.461 "rw_mbytes_per_sec": 0, 00:10:29.461 "r_mbytes_per_sec": 0, 00:10:29.461 "w_mbytes_per_sec": 0 00:10:29.461 }, 00:10:29.461 "claimed": false, 00:10:29.461 "zoned": false, 00:10:29.461 "supported_io_types": { 00:10:29.461 "read": true, 00:10:29.461 "write": true, 00:10:29.461 "unmap": true, 00:10:29.461 "flush": true, 00:10:29.461 "reset": true, 00:10:29.461 "nvme_admin": true, 00:10:29.461 "nvme_io": true, 00:10:29.461 "nvme_io_md": false, 00:10:29.461 "write_zeroes": true, 00:10:29.461 "zcopy": false, 00:10:29.461 "get_zone_info": false, 00:10:29.461 "zone_management": false, 00:10:29.461 "zone_append": false, 00:10:29.461 "compare": true, 00:10:29.461 "compare_and_write": true, 00:10:29.461 "abort": true, 00:10:29.461 "seek_hole": false, 00:10:29.461 "seek_data": false, 00:10:29.461 "copy": true, 00:10:29.461 "nvme_iov_md": false 00:10:29.461 }, 00:10:29.461 "memory_domains": [ 00:10:29.461 { 00:10:29.461 "dma_device_id": "system", 00:10:29.461 "dma_device_type": 1 00:10:29.461 } 00:10:29.461 ], 00:10:29.461 "driver_specific": { 00:10:29.461 "nvme": [ 00:10:29.461 { 00:10:29.461 "trid": { 00:10:29.461 "trtype": "TCP", 00:10:29.461 "adrfam": "IPv4", 00:10:29.461 "traddr": "10.0.0.2", 00:10:29.461 "trsvcid": "4420", 00:10:29.461 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:29.461 }, 00:10:29.461 "ctrlr_data": { 00:10:29.461 "cntlid": 1, 00:10:29.461 "vendor_id": "0x8086", 00:10:29.461 "model_number": "SPDK bdev Controller", 00:10:29.461 "serial_number": "SPDK0", 00:10:29.461 "firmware_revision": "25.01", 00:10:29.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:29.461 "oacs": { 00:10:29.461 "security": 0, 00:10:29.461 "format": 0, 00:10:29.461 "firmware": 0, 00:10:29.461 "ns_manage": 0 00:10:29.461 }, 00:10:29.461 "multi_ctrlr": true, 00:10:29.461 
"ana_reporting": false 00:10:29.461 }, 00:10:29.461 "vs": { 00:10:29.461 "nvme_version": "1.3" 00:10:29.461 }, 00:10:29.461 "ns_data": { 00:10:29.461 "id": 1, 00:10:29.461 "can_share": true 00:10:29.461 } 00:10:29.461 } 00:10:29.461 ], 00:10:29.461 "mp_policy": "active_passive" 00:10:29.461 } 00:10:29.461 } 00:10:29.461 ] 00:10:29.461 10:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=839114 00:10:29.461 10:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:29.461 10:38:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:29.461 Running I/O for 10 seconds... 00:10:30.846 Latency(us) 00:10:30.846 [2024-11-19T09:38:10.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.846 Nvme0n1 : 1.00 23906.00 93.38 0.00 0.00 0.00 0.00 0.00 00:10:30.846 [2024-11-19T09:38:10.041Z] =================================================================================================================== 00:10:30.846 [2024-11-19T09:38:10.041Z] Total : 23906.00 93.38 0.00 0.00 0.00 0.00 0.00 00:10:30.846 00:10:31.417 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:31.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.677 Nvme0n1 : 2.00 24025.00 93.85 0.00 0.00 0.00 0.00 0.00 00:10:31.677 [2024-11-19T09:38:10.872Z] =================================================================================================================== 00:10:31.677 [2024-11-19T09:38:10.872Z] Total : 24025.00 93.85 0.00 0.00 0.00 0.00 0.00 00:10:31.677 00:10:31.677 true 00:10:31.677 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:31.677 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:31.938 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:31.938 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:31.938 10:38:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 839114 00:10:32.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.510 Nvme0n1 : 3.00 24091.33 94.11 0.00 0.00 0.00 0.00 0.00 00:10:32.510 [2024-11-19T09:38:11.705Z] =================================================================================================================== 00:10:32.510 [2024-11-19T09:38:11.705Z] Total : 24091.33 94.11 0.00 0.00 0.00 0.00 0.00 00:10:32.510 00:10:33.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.893 Nvme0n1 : 4.00 24156.50 94.36 0.00 0.00 0.00 0.00 0.00 00:10:33.893 [2024-11-19T09:38:13.088Z] 
=================================================================================================================== 00:10:33.893 [2024-11-19T09:38:13.088Z] Total : 24156.50 94.36 0.00 0.00 0.00 0.00 0.00 00:10:33.893 00:10:34.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.837 Nvme0n1 : 5.00 24202.00 94.54 0.00 0.00 0.00 0.00 0.00 00:10:34.837 [2024-11-19T09:38:14.032Z] =================================================================================================================== 00:10:34.837 [2024-11-19T09:38:14.032Z] Total : 24202.00 94.54 0.00 0.00 0.00 0.00 0.00 00:10:34.837 00:10:35.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.780 Nvme0n1 : 6.00 24237.67 94.68 0.00 0.00 0.00 0.00 0.00 00:10:35.780 [2024-11-19T09:38:14.975Z] =================================================================================================================== 00:10:35.780 [2024-11-19T09:38:14.975Z] Total : 24237.67 94.68 0.00 0.00 0.00 0.00 0.00 00:10:35.780 00:10:36.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.723 Nvme0n1 : 7.00 24264.29 94.78 0.00 0.00 0.00 0.00 0.00 00:10:36.723 [2024-11-19T09:38:15.918Z] =================================================================================================================== 00:10:36.723 [2024-11-19T09:38:15.918Z] Total : 24264.29 94.78 0.00 0.00 0.00 0.00 0.00 00:10:36.723 00:10:37.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.666 Nvme0n1 : 8.00 24287.25 94.87 0.00 0.00 0.00 0.00 0.00 00:10:37.666 [2024-11-19T09:38:16.861Z] =================================================================================================================== 00:10:37.666 [2024-11-19T09:38:16.861Z] Total : 24287.25 94.87 0.00 0.00 0.00 0.00 0.00 00:10:37.666 00:10:38.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.606 Nvme0n1 : 9.00 24308.67 94.96 0.00 0.00 0.00 0.00 0.00 00:10:38.606 [2024-11-19T09:38:17.801Z] =================================================================================================================== 00:10:38.606 [2024-11-19T09:38:17.801Z] Total : 24308.67 94.96 0.00 0.00 0.00 0.00 0.00 00:10:38.606 00:10:39.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.546 Nvme0n1 : 10.00 24322.60 95.01 0.00 0.00 0.00 0.00 0.00 00:10:39.546 [2024-11-19T09:38:18.741Z] =================================================================================================================== 00:10:39.546 [2024-11-19T09:38:18.741Z] Total : 24322.60 95.01 0.00 0.00 0.00 0.00 0.00 00:10:39.546 00:10:39.546 00:10:39.546 Latency(us) 00:10:39.547 [2024-11-19T09:38:18.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.547 Nvme0n1 : 10.01 24323.02 95.01 0.00 0.00 5258.34 2471.25 10540.37 00:10:39.547 [2024-11-19T09:38:18.742Z] =================================================================================================================== 00:10:39.547 [2024-11-19T09:38:18.742Z] Total : 24323.02 95.01 0.00 0.00 5258.34 2471.25 10540.37 00:10:39.547 { 00:10:39.547 "results": [ 00:10:39.547 { 00:10:39.547 "job": "Nvme0n1", 00:10:39.547 "core_mask": "0x2", 00:10:39.547 "workload": "randwrite", 00:10:39.547 "status": "finished", 00:10:39.547 "queue_depth": 128, 00:10:39.547 "io_size": 4096, 00:10:39.547 
"runtime": 10.005091, 00:10:39.547 "iops": 24323.01715196793, 00:10:39.547 "mibps": 95.01178574987473, 00:10:39.547 "io_failed": 0, 00:10:39.547 "io_timeout": 0, 00:10:39.547 "avg_latency_us": 5258.33648627103, 00:10:39.547 "min_latency_us": 2471.2533333333336, 00:10:39.547 "max_latency_us": 10540.373333333333 00:10:39.547 } 00:10:39.547 ], 00:10:39.547 "core_count": 1 00:10:39.547 } 00:10:39.547 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 838772 00:10:39.547 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 838772 ']' 00:10:39.547 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 838772 00:10:39.547 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:39.547 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.547 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 838772 00:10:39.807 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:39.807 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:39.807 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 838772' 00:10:39.807 killing process with pid 838772 00:10:39.807 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 838772 00:10:39.807 Received shutdown signal, test time was about 10.000000 seconds 00:10:39.807 00:10:39.807 Latency(us) 00:10:39.807 [2024-11-19T09:38:19.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.807 [2024-11-19T09:38:19.002Z] =================================================================================================================== 00:10:39.807 [2024-11-19T09:38:19.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:39.807 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 838772 00:10:39.807 10:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.067 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:40.067 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:40.067 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:40.328 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:40.328 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:40.328 10:38:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:40.589 [2024-11-19 10:38:19.580356] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:40.589 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:40.589 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:40.589 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:40.589 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:40.590 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.590 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:40.590 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.590 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:40.590 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.590 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:40.590 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:40.590 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:40.850 request: 00:10:40.850 { 00:10:40.850 "uuid": "2ed2f6ea-dcf5-43a5-833f-c307dd53e70a", 00:10:40.850 "method": "bdev_lvol_get_lvstores", 00:10:40.850 "req_id": 1 00:10:40.850 } 00:10:40.850 Got JSON-RPC error response 00:10:40.850 response: 00:10:40.850 { 00:10:40.850 "code": -19, 00:10:40.850 "message": "No such device" 00:10:40.850 } 00:10:40.850 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:40.851 aio_bdev 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ffc1ef4e-0691-4d91-be60-17abac859dd9 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ffc1ef4e-0691-4d91-be60-17abac859dd9 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.851 10:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:41.117 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ffc1ef4e-0691-4d91-be60-17abac859dd9 -t 2000 00:10:41.117 [ 00:10:41.117 { 00:10:41.117 "name": "ffc1ef4e-0691-4d91-be60-17abac859dd9", 00:10:41.117 "aliases": [ 00:10:41.117 "lvs/lvol" 00:10:41.117 ], 00:10:41.117 "product_name": "Logical Volume", 00:10:41.117 "block_size": 4096, 00:10:41.117 "num_blocks": 38912, 00:10:41.117 "uuid": "ffc1ef4e-0691-4d91-be60-17abac859dd9", 00:10:41.117 "assigned_rate_limits": { 00:10:41.117 "rw_ios_per_sec": 0, 00:10:41.117 "rw_mbytes_per_sec": 0, 00:10:41.117 "r_mbytes_per_sec": 0, 00:10:41.117 "w_mbytes_per_sec": 0 00:10:41.117 }, 00:10:41.117 "claimed": false, 00:10:41.117 "zoned": false, 00:10:41.117 "supported_io_types": { 00:10:41.117 "read": true, 00:10:41.117 "write": true, 00:10:41.117 "unmap": true, 00:10:41.117 "flush": false, 00:10:41.117 "reset": true, 00:10:41.117 "nvme_admin": false, 00:10:41.117 "nvme_io": false, 00:10:41.117 "nvme_io_md": false, 00:10:41.117 "write_zeroes": true, 00:10:41.117 "zcopy": false, 00:10:41.117 "get_zone_info": false, 00:10:41.117 "zone_management": false, 00:10:41.117 "zone_append": false, 00:10:41.117 "compare": false, 00:10:41.117 "compare_and_write": false, 00:10:41.117 "abort": false, 00:10:41.117 "seek_hole": true, 00:10:41.117 "seek_data": true, 00:10:41.117 "copy": false, 00:10:41.117 "nvme_iov_md": false 00:10:41.117 }, 00:10:41.117 "driver_specific": { 00:10:41.117 "lvol": { 00:10:41.117 "lvol_store_uuid": "2ed2f6ea-dcf5-43a5-833f-c307dd53e70a", 00:10:41.117 "base_bdev": "aio_bdev", 00:10:41.117 "thin_provision": false, 00:10:41.117 "num_allocated_clusters": 38, 00:10:41.117 "snapshot": false, 00:10:41.117 "clone": false, 00:10:41.117 "esnap_clone": false 00:10:41.117 } 00:10:41.117 } 00:10:41.117 } 00:10:41.117 ] 00:10:41.378 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:41.378 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:41.378 
10:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:41.378 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:41.378 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:41.378 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:41.640 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:41.640 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ffc1ef4e-0691-4d91-be60-17abac859dd9 00:10:41.640 10:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ed2f6ea-dcf5-43a5-833f-c307dd53e70a 00:10:41.902 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:42.164 00:10:42.164 real 0m15.831s 00:10:42.164 user 0m15.469s 00:10:42.164 sys 0m1.485s 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:42.164 ************************************ 00:10:42.164 END TEST lvs_grow_clean 00:10:42.164 ************************************ 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:42.164 ************************************ 00:10:42.164 START TEST lvs_grow_dirty 00:10:42.164 ************************************ 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:42.164 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:42.426 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:42.426 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:42.687 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:42.687 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:42.687 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:42.687 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:42.687 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:42.687 10:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 lvol 150 00:10:42.948 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7d38f3ac-26ff-495a-990e-527ed59b87ad 00:10:42.948 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:42.949 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:43.209 [2024-11-19 10:38:22.148634] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:43.209 [2024-11-19 10:38:22.148681] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:43.209 true 00:10:43.209 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:43.209 10:38:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:43.209 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:43.209 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:43.469 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d38f3ac-26ff-495a-990e-527ed59b87ad 00:10:43.469 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:43.730 [2024-11-19 10:38:22.774423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.730 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:43.991 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=841938 00:10:43.991 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.991 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:43.991 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 841938 /var/tmp/bdevperf.sock 00:10:43.991 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 841938 ']' 00:10:43.991 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:43.991 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.991 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:43.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:43.991 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.991 10:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:43.991 [2024-11-19 10:38:23.009040] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
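The fixture traced above can be reproduced by hand against a running SPDK target. A minimal sketch of the grow flow this test exercises, using only RPCs and flag spellings that appear in this trace (the backing-file path is illustrative; rpc.py is assumed to reach the target on its default socket):

  truncate -s 200M /tmp/aio_bdev_file
  scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150
  truncate -s 400M /tmp/aio_bdev_file              # grow the backing file
  scripts/rpc.py bdev_aio_rescan aio_bdev          # block count 51200 -> 102400
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"  # lvstore claims the new space
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

With 4 MiB clusters, the 200 MiB file yields 49 data clusters and the grown 400 MiB file yields 99, which is exactly what the (( data_clusters == 49 )) and (( data_clusters == 99 )) assertions in this log check.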
00:10:43.991 [2024-11-19 10:38:23.009093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841938 ] 00:10:43.991 [2024-11-19 10:38:23.090831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.991 [2024-11-19 10:38:23.120586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.936 10:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.936 10:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:44.936 10:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:44.936 Nvme0n1 00:10:44.936 10:38:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:45.197 [ 00:10:45.197 { 00:10:45.197 "name": "Nvme0n1", 00:10:45.197 "aliases": [ 00:10:45.197 "7d38f3ac-26ff-495a-990e-527ed59b87ad" 00:10:45.197 ], 00:10:45.197 "product_name": "NVMe disk", 00:10:45.197 "block_size": 4096, 00:10:45.197 "num_blocks": 38912, 00:10:45.197 "uuid": "7d38f3ac-26ff-495a-990e-527ed59b87ad", 00:10:45.197 "numa_id": 0, 00:10:45.197 "assigned_rate_limits": { 00:10:45.197 "rw_ios_per_sec": 0, 00:10:45.197 "rw_mbytes_per_sec": 0, 00:10:45.197 "r_mbytes_per_sec": 0, 00:10:45.197 "w_mbytes_per_sec": 0 00:10:45.197 }, 00:10:45.197 "claimed": false, 00:10:45.197 "zoned": false, 00:10:45.197 "supported_io_types": { 00:10:45.197 "read": true, 00:10:45.197 "write": true, 00:10:45.197 "unmap": true, 00:10:45.197 "flush": true, 00:10:45.197 "reset": true, 00:10:45.197 "nvme_admin": true, 00:10:45.197 "nvme_io": true, 00:10:45.197 "nvme_io_md": false, 00:10:45.197 "write_zeroes": true, 00:10:45.197 "zcopy": false, 00:10:45.197 "get_zone_info": false, 00:10:45.197 "zone_management": false, 00:10:45.197 "zone_append": false, 00:10:45.197 "compare": true, 00:10:45.197 "compare_and_write": true, 00:10:45.197 "abort": true, 00:10:45.197 "seek_hole": false, 00:10:45.197 "seek_data": false, 00:10:45.197 "copy": true, 00:10:45.197 "nvme_iov_md": false 00:10:45.197 }, 00:10:45.197 "memory_domains": [ 00:10:45.197 { 00:10:45.197 "dma_device_id": "system", 00:10:45.197 "dma_device_type": 1 00:10:45.197 } 00:10:45.197 ], 00:10:45.197 "driver_specific": { 00:10:45.197 "nvme": [ 00:10:45.197 { 00:10:45.197 "trid": { 00:10:45.197 "trtype": "TCP", 00:10:45.197 "adrfam": "IPv4", 00:10:45.197 "traddr": "10.0.0.2", 00:10:45.197 "trsvcid": "4420", 00:10:45.197 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:45.197 }, 00:10:45.197 "ctrlr_data": { 00:10:45.197 "cntlid": 1, 00:10:45.197 "vendor_id": "0x8086", 00:10:45.197 "model_number": "SPDK bdev Controller", 00:10:45.197 "serial_number": "SPDK0", 00:10:45.197 "firmware_revision": "25.01", 00:10:45.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:45.197 "oacs": { 00:10:45.197 "security": 0, 00:10:45.197 "format": 0, 00:10:45.197 "firmware": 0, 00:10:45.197 "ns_manage": 0 00:10:45.197 }, 00:10:45.197 "multi_ctrlr": true, 00:10:45.197 
"ana_reporting": false 00:10:45.197 }, 00:10:45.197 "vs": { 00:10:45.197 "nvme_version": "1.3" 00:10:45.197 }, 00:10:45.197 "ns_data": { 00:10:45.197 "id": 1, 00:10:45.197 "can_share": true 00:10:45.197 } 00:10:45.197 } 00:10:45.197 ], 00:10:45.197 "mp_policy": "active_passive" 00:10:45.197 } 00:10:45.197 } 00:10:45.197 ] 00:10:45.197 10:38:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=842207 00:10:45.197 10:38:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:45.197 10:38:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:45.197 Running I/O for 10 seconds... 00:10:46.137 Latency(us) 00:10:46.137 [2024-11-19T09:38:25.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.137 Nvme0n1 : 1.00 25110.00 98.09 0.00 0.00 0.00 0.00 0.00 00:10:46.137 [2024-11-19T09:38:25.332Z] =================================================================================================================== 00:10:46.137 [2024-11-19T09:38:25.332Z] Total : 25110.00 98.09 0.00 0.00 0.00 0.00 0.00 00:10:46.137 00:10:47.078 10:38:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:47.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.339 Nvme0n1 : 2.00 25323.00 98.92 0.00 0.00 0.00 0.00 0.00 00:10:47.339 [2024-11-19T09:38:26.534Z] =================================================================================================================== 00:10:47.339 [2024-11-19T09:38:26.534Z] Total : 25323.00 98.92 0.00 0.00 0.00 0.00 0.00 00:10:47.339 00:10:47.339 true 00:10:47.339 10:38:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:47.339 10:38:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:47.600 10:38:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:47.600 10:38:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:47.600 10:38:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 842207 00:10:48.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.172 Nvme0n1 : 3.00 25393.67 99.19 0.00 0.00 0.00 0.00 0.00 00:10:48.172 [2024-11-19T09:38:27.367Z] =================================================================================================================== 00:10:48.172 [2024-11-19T09:38:27.367Z] Total : 25393.67 99.19 0.00 0.00 0.00 0.00 0.00 00:10:48.172 00:10:49.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:49.132 Nvme0n1 : 4.00 25445.25 99.40 0.00 0.00 0.00 0.00 0.00 00:10:49.132 [2024-11-19T09:38:28.327Z] 
=================================================================================================================== 00:10:49.132 [2024-11-19T09:38:28.327Z] Total : 25445.25 99.40 0.00 0.00 0.00 0.00 0.00 00:10:49.132 00:10:50.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.515 Nvme0n1 : 5.00 25489.00 99.57 0.00 0.00 0.00 0.00 0.00 00:10:50.515 [2024-11-19T09:38:29.710Z] =================================================================================================================== 00:10:50.515 [2024-11-19T09:38:29.710Z] Total : 25489.00 99.57 0.00 0.00 0.00 0.00 0.00 00:10:50.515 00:10:51.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.457 Nvme0n1 : 6.00 25507.33 99.64 0.00 0.00 0.00 0.00 0.00 00:10:51.457 [2024-11-19T09:38:30.652Z] =================================================================================================================== 00:10:51.457 [2024-11-19T09:38:30.652Z] Total : 25507.33 99.64 0.00 0.00 0.00 0.00 0.00 00:10:51.457 00:10:52.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.400 Nvme0n1 : 7.00 25529.00 99.72 0.00 0.00 0.00 0.00 0.00 00:10:52.400 [2024-11-19T09:38:31.595Z] =================================================================================================================== 00:10:52.400 [2024-11-19T09:38:31.595Z] Total : 25529.00 99.72 0.00 0.00 0.00 0.00 0.00 00:10:52.400 00:10:53.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.342 Nvme0n1 : 8.00 25547.88 99.80 0.00 0.00 0.00 0.00 0.00 00:10:53.342 [2024-11-19T09:38:32.537Z] =================================================================================================================== 00:10:53.342 [2024-11-19T09:38:32.537Z] Total : 25547.88 99.80 0.00 0.00 0.00 0.00 0.00 00:10:53.342 00:10:54.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.283 Nvme0n1 : 9.00 25565.78 99.87 0.00 0.00 0.00 0.00 0.00 00:10:54.283 [2024-11-19T09:38:33.478Z] =================================================================================================================== 00:10:54.283 [2024-11-19T09:38:33.478Z] Total : 25565.78 99.87 0.00 0.00 0.00 0.00 0.00 00:10:54.283 00:10:55.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.224 Nvme0n1 : 10.00 25577.40 99.91 0.00 0.00 0.00 0.00 0.00 00:10:55.224 [2024-11-19T09:38:34.419Z] =================================================================================================================== 00:10:55.224 [2024-11-19T09:38:34.419Z] Total : 25577.40 99.91 0.00 0.00 0.00 0.00 0.00 00:10:55.224 00:10:55.224 00:10:55.224 Latency(us) 00:10:55.224 [2024-11-19T09:38:34.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.224 Nvme0n1 : 10.00 25580.09 99.92 0.00 0.00 5000.94 3126.61 14090.24 00:10:55.224 [2024-11-19T09:38:34.419Z] =================================================================================================================== 00:10:55.224 [2024-11-19T09:38:34.419Z] Total : 25580.09 99.92 0.00 0.00 5000.94 3126.61 14090.24 00:10:55.224 { 00:10:55.224 "results": [ 00:10:55.224 { 00:10:55.224 "job": "Nvme0n1", 00:10:55.224 "core_mask": "0x2", 00:10:55.224 "workload": "randwrite", 00:10:55.224 "status": "finished", 00:10:55.224 "queue_depth": 128, 00:10:55.224 "io_size": 4096, 00:10:55.224 
"runtime": 10.003286, 00:10:55.224 "iops": 25580.094380986407, 00:10:55.224 "mibps": 99.92224367572815, 00:10:55.224 "io_failed": 0, 00:10:55.224 "io_timeout": 0, 00:10:55.224 "avg_latency_us": 5000.944064508145, 00:10:55.224 "min_latency_us": 3126.6133333333332, 00:10:55.224 "max_latency_us": 14090.24 00:10:55.224 } 00:10:55.224 ], 00:10:55.224 "core_count": 1 00:10:55.224 } 00:10:55.224 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 841938 00:10:55.224 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 841938 ']' 00:10:55.224 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 841938 00:10:55.224 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:55.224 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.224 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 841938 00:10:55.484 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:55.485 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:55.485 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 841938' 00:10:55.485 killing process with pid 841938 00:10:55.485 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 841938 00:10:55.485 Received shutdown signal, test time was about 10.000000 seconds 00:10:55.485 00:10:55.485 Latency(us) 00:10:55.485 [2024-11-19T09:38:34.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.485 [2024-11-19T09:38:34.680Z] =================================================================================================================== 00:10:55.485 [2024-11-19T09:38:34.680Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:55.485 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 841938 00:10:55.485 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:55.745 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:55.745 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:55.745 10:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:56.007 10:38:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 838154 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 838154 00:10:56.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 838154 Killed "${NVMF_APP[@]}" "$@" 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=844473 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 844473 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 844473 ']' 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.007 10:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:56.267 [2024-11-19 10:38:35.227818] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:10:56.267 [2024-11-19 10:38:35.227900] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.267 [2024-11-19 10:38:35.322122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.267 [2024-11-19 10:38:35.352463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.267 [2024-11-19 10:38:35.352492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.267 [2024-11-19 10:38:35.352498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.267 [2024-11-19 10:38:35.352503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
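What the dirty variant adds on top of the clean run: the first target process was killed with SIGKILL mid-run (the "line 75: 838154 Killed" message above), so the lvstore superblock is left dirty on disk. The freshly started target knows nothing about the lvstore until the AIO bdev is re-created, which is what triggers the bs_recover notices traced just below. A condensed sketch of the recovery check, reusing the names from the sketch above (expected counts are the ones this log asserts):

  scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096   # replays dirty blobstore metadata
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99

If recovery works, the grown capacity (99 clusters) and the lvol's 38 allocated clusters survive the unclean shutdown, leaving 61 clusters free.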
00:10:56.267 [2024-11-19 10:38:35.352507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.267 [2024-11-19 10:38:35.352953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.839 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.839 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:56.839 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.839 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.839 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:57.099 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.099 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:57.099 [2024-11-19 10:38:36.206424] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:57.099 [2024-11-19 10:38:36.206498] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:57.099 [2024-11-19 10:38:36.206520] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:57.099 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:57.099 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7d38f3ac-26ff-495a-990e-527ed59b87ad 00:10:57.099 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7d38f3ac-26ff-495a-990e-527ed59b87ad 00:10:57.099 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.099 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:57.099 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.099 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.099 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:57.360 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d38f3ac-26ff-495a-990e-527ed59b87ad -t 2000 00:10:57.621 [ 00:10:57.621 { 00:10:57.621 "name": "7d38f3ac-26ff-495a-990e-527ed59b87ad", 00:10:57.621 "aliases": [ 00:10:57.621 "lvs/lvol" 00:10:57.621 ], 00:10:57.621 "product_name": "Logical Volume", 00:10:57.621 "block_size": 4096, 00:10:57.621 "num_blocks": 38912, 00:10:57.621 "uuid": "7d38f3ac-26ff-495a-990e-527ed59b87ad", 00:10:57.621 "assigned_rate_limits": { 00:10:57.621 "rw_ios_per_sec": 0, 00:10:57.621 "rw_mbytes_per_sec": 0, 
00:10:57.621 "r_mbytes_per_sec": 0, 00:10:57.621 "w_mbytes_per_sec": 0 00:10:57.621 }, 00:10:57.621 "claimed": false, 00:10:57.621 "zoned": false, 00:10:57.621 "supported_io_types": { 00:10:57.621 "read": true, 00:10:57.621 "write": true, 00:10:57.621 "unmap": true, 00:10:57.621 "flush": false, 00:10:57.621 "reset": true, 00:10:57.621 "nvme_admin": false, 00:10:57.621 "nvme_io": false, 00:10:57.621 "nvme_io_md": false, 00:10:57.621 "write_zeroes": true, 00:10:57.621 "zcopy": false, 00:10:57.621 "get_zone_info": false, 00:10:57.621 "zone_management": false, 00:10:57.621 "zone_append": false, 00:10:57.621 "compare": false, 00:10:57.621 "compare_and_write": false, 00:10:57.621 "abort": false, 00:10:57.621 "seek_hole": true, 00:10:57.621 "seek_data": true, 00:10:57.621 "copy": false, 00:10:57.621 "nvme_iov_md": false 00:10:57.621 }, 00:10:57.621 "driver_specific": { 00:10:57.621 "lvol": { 00:10:57.621 "lvol_store_uuid": "0fbc4819-b33d-4486-80a2-f4aa0fe45237", 00:10:57.621 "base_bdev": "aio_bdev", 00:10:57.621 "thin_provision": false, 00:10:57.621 "num_allocated_clusters": 38, 00:10:57.621 "snapshot": false, 00:10:57.621 "clone": false, 00:10:57.621 "esnap_clone": false 00:10:57.621 } 00:10:57.621 } 00:10:57.621 } 00:10:57.621 ] 00:10:57.621 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:57.621 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:57.621 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:57.621 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:57.621 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:57.621 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:57.881 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:57.881 10:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:57.881 [2024-11-19 10:38:37.051088] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:58.142 request: 00:10:58.142 { 00:10:58.142 "uuid": "0fbc4819-b33d-4486-80a2-f4aa0fe45237", 00:10:58.142 "method": "bdev_lvol_get_lvstores", 00:10:58.142 "req_id": 1 00:10:58.142 } 00:10:58.142 Got JSON-RPC error response 00:10:58.142 response: 00:10:58.142 { 00:10:58.142 "code": -19, 00:10:58.142 "message": "No such device" 00:10:58.142 } 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:58.142 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:58.402 aio_bdev 00:10:58.402 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7d38f3ac-26ff-495a-990e-527ed59b87ad 00:10:58.402 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7d38f3ac-26ff-495a-990e-527ed59b87ad 00:10:58.402 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.402 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:58.402 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.402 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.402 10:38:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:58.663 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d38f3ac-26ff-495a-990e-527ed59b87ad -t 2000 00:10:58.663 [ 00:10:58.663 { 00:10:58.663 "name": "7d38f3ac-26ff-495a-990e-527ed59b87ad", 00:10:58.663 "aliases": [ 00:10:58.663 "lvs/lvol" 00:10:58.663 ], 00:10:58.663 "product_name": "Logical Volume", 00:10:58.663 "block_size": 4096, 00:10:58.663 "num_blocks": 38912, 00:10:58.663 "uuid": "7d38f3ac-26ff-495a-990e-527ed59b87ad", 00:10:58.663 "assigned_rate_limits": { 00:10:58.663 "rw_ios_per_sec": 0, 00:10:58.663 "rw_mbytes_per_sec": 0, 00:10:58.663 "r_mbytes_per_sec": 0, 00:10:58.663 "w_mbytes_per_sec": 0 00:10:58.663 }, 00:10:58.663 "claimed": false, 00:10:58.663 "zoned": false, 00:10:58.663 "supported_io_types": { 00:10:58.663 "read": true, 00:10:58.663 "write": true, 00:10:58.663 "unmap": true, 00:10:58.663 "flush": false, 00:10:58.663 "reset": true, 00:10:58.663 "nvme_admin": false, 00:10:58.663 "nvme_io": false, 00:10:58.663 "nvme_io_md": false, 00:10:58.663 "write_zeroes": true, 00:10:58.663 "zcopy": false, 00:10:58.663 "get_zone_info": false, 00:10:58.663 "zone_management": false, 00:10:58.663 "zone_append": false, 00:10:58.663 "compare": false, 00:10:58.663 "compare_and_write": false, 00:10:58.663 "abort": false, 00:10:58.664 "seek_hole": true, 00:10:58.664 "seek_data": true, 00:10:58.664 "copy": false, 00:10:58.664 "nvme_iov_md": false 00:10:58.664 }, 00:10:58.664 "driver_specific": { 00:10:58.664 "lvol": { 00:10:58.664 "lvol_store_uuid": "0fbc4819-b33d-4486-80a2-f4aa0fe45237", 00:10:58.664 "base_bdev": "aio_bdev", 00:10:58.664 "thin_provision": false, 00:10:58.664 "num_allocated_clusters": 38, 00:10:58.664 "snapshot": false, 00:10:58.664 "clone": false, 00:10:58.664 "esnap_clone": false 00:10:58.664 } 00:10:58.664 } 00:10:58.664 } 00:10:58.664 ] 00:10:58.664 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:58.664 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:58.664 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:58.923 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:58.923 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:58.923 10:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:58.923 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:58.923 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d38f3ac-26ff-495a-990e-527ed59b87ad 00:10:59.183 10:38:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0fbc4819-b33d-4486-80a2-f4aa0fe45237 00:10:59.444 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:59.705 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:59.705 00:10:59.705 real 0m17.438s 00:10:59.705 user 0m45.459s 00:10:59.705 sys 0m3.138s 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:59.706 ************************************ 00:10:59.706 END TEST lvs_grow_dirty 00:10:59.706 ************************************ 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:59.706 nvmf_trace.0 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.706 rmmod nvme_tcp 00:10:59.706 rmmod nvme_fabrics 00:10:59.706 rmmod nvme_keyring 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:59.706 
10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 844473 ']' 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 844473 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 844473 ']' 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 844473 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.706 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 844473 00:10:59.967 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.967 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.967 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 844473' 00:10:59.967 killing process with pid 844473 00:10:59.967 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 844473 00:10:59.967 10:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 844473 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.967 10:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.514 00:11:02.514 real 0m44.615s 00:11:02.514 user 1m7.362s 00:11:02.514 sys 0m10.684s 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:02.514 ************************************ 00:11:02.514 END TEST nvmf_lvs_grow 00:11:02.514 ************************************ 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.514 ************************************ 00:11:02.514 START TEST nvmf_bdev_io_wait 00:11:02.514 ************************************ 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:02.514 * Looking for test storage... 00:11:02.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:02.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.514 --rc genhtml_branch_coverage=1 00:11:02.514 --rc genhtml_function_coverage=1 00:11:02.514 --rc genhtml_legend=1 00:11:02.514 --rc geninfo_all_blocks=1 00:11:02.514 --rc geninfo_unexecuted_blocks=1 00:11:02.514 00:11:02.514 ' 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:02.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.514 --rc genhtml_branch_coverage=1 00:11:02.514 --rc genhtml_function_coverage=1 00:11:02.514 --rc genhtml_legend=1 00:11:02.514 --rc geninfo_all_blocks=1 00:11:02.514 --rc geninfo_unexecuted_blocks=1 00:11:02.514 00:11:02.514 ' 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:02.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.514 --rc genhtml_branch_coverage=1 00:11:02.514 --rc genhtml_function_coverage=1 00:11:02.514 --rc genhtml_legend=1 00:11:02.514 --rc geninfo_all_blocks=1 00:11:02.514 --rc geninfo_unexecuted_blocks=1 00:11:02.514 00:11:02.514 ' 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:02.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.514 --rc genhtml_branch_coverage=1 00:11:02.514 --rc genhtml_function_coverage=1 00:11:02.514 --rc genhtml_legend=1 00:11:02.514 --rc geninfo_all_blocks=1 00:11:02.514 --rc geninfo_unexecuted_blocks=1 00:11:02.514 00:11:02.514 ' 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.514 10:38:41 
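The cmp_versions trace above (invoked as lt 1.15 2) is hard to read inline; reconstructed from the scripts/common.sh steps it records, the helper is a field-by-field numeric comparison of dotted version strings. A sketch under that reading (the real helper also normalizes odd fields through its decimal() step; fields are assumed numeric here):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        # walk the longer field list; missing fields compare as 0
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }   # decided: greater
            (( a < b )) && { [[ $op == '<' ]]; return; }   # decided: less
        done
        [[ $op == *=* ]]   # all fields equal
    }

In the trace, lt 1.15 2 compares 1 against 2 in the first field and returns 0, so lcov 1.15 is treated as pre-2.0 and the legacy --rc lcov_branch_coverage/lcov_function_coverage flags are exported.
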
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.514 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.515 10:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.654 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:10.655 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:10.655 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.655 10:38:48 
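The discovery logic running here matches NICs against a fixed table of PCI IDs (the e810/x722/mlx arrays above) and then, a few lines below, resolves each matching function to its kernel interface through sysfs. Outside the harness, the same lookup can be approximated per device as follows; a sketch, with the E810 vendor:device pair taken from the "Found 0000:4b:00.x (0x8086 - 0x159b)" messages, and lspci standing in for the harness's prebuilt pci_bus_cache:

    # list E810 functions (vendor 0x8086, device 0x159b), domain-qualified
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        # the netdev bound to the function lives under its sysfs node
        for path in /sys/bus/pci/devices/$pci/net/*; do
            dev=${path##*/}
            state=$(cat "$path/operstate" 2>/dev/null)
            echo "Found net device under $pci: $dev ($state)"
        done
    done

The harness additionally keeps only interfaces that are up, which is what the [[ up == up ]] checks in the trace correspond to.
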
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:10.655 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:10.655 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:10.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:11:10.655 00:11:10.655 --- 10.0.0.2 ping statistics --- 00:11:10.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.655 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:11:10.655 00:11:10.655 --- 10.0.0.1 ping statistics --- 00:11:10.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.655 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=849464 00:11:10.655 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 849464 00:11:10.656 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:10.656 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 849464 ']' 00:11:10.656 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.656 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.656 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.656 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.656 10:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.656 [2024-11-19 10:38:48.972858] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
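The --wait-for-rpc flag on the nvmf_tgt invocation above explains the order of the RPCs in the next trace lines: the target comes up with its framework paused, listening only on /var/tmp/spdk.sock, so options that must be fixed before subsystem initialization can still be set. Here that is a deliberately tiny bdev I/O pool and cache, which is what forces the wait-for-buffer path this test exercises. Reduced to its rpc.py calls (arguments copied from the trace below; the rpc.py path is shortened for readability):

    # target idling after --wait-for-rpc; framework not yet initialized
    rpc.py bdev_set_options -p 5 -c 1        # pre-init only: I/O pool of 5, cache of 1
    rpc.py framework_start_init              # resume subsystem initialization
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the listener call the target is reachable at 10.0.0.2:4420 and the bdevperf clients can attach.
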
00:11:10.656 [2024-11-19 10:38:48.972926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.656 [2024-11-19 10:38:49.072059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.656 [2024-11-19 10:38:49.127027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.656 [2024-11-19 10:38:49.127080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.656 [2024-11-19 10:38:49.127089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.656 [2024-11-19 10:38:49.127097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.656 [2024-11-19 10:38:49.127104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.656 [2024-11-19 10:38:49.129155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.656 [2024-11-19 10:38:49.129318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.656 [2024-11-19 10:38:49.129560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.656 [2024-11-19 10:38:49.129562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.656 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:11:10.918 [2024-11-19 10:38:49.912483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 Malloc0 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 [2024-11-19 10:38:49.977822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=849668 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=849670 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:10.918 { 00:11:10.918 "params": { 
00:11:10.918 "name": "Nvme$subsystem", 00:11:10.918 "trtype": "$TEST_TRANSPORT", 00:11:10.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.918 "adrfam": "ipv4", 00:11:10.918 "trsvcid": "$NVMF_PORT", 00:11:10.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.918 "hdgst": ${hdgst:-false}, 00:11:10.918 "ddgst": ${ddgst:-false} 00:11:10.918 }, 00:11:10.918 "method": "bdev_nvme_attach_controller" 00:11:10.918 } 00:11:10.918 EOF 00:11:10.918 )") 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=849672 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=849675 00:11:10.918 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:10.918 { 00:11:10.918 "params": { 00:11:10.918 "name": "Nvme$subsystem", 00:11:10.918 "trtype": "$TEST_TRANSPORT", 00:11:10.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.918 "adrfam": "ipv4", 00:11:10.918 "trsvcid": "$NVMF_PORT", 00:11:10.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.919 "hdgst": ${hdgst:-false}, 00:11:10.919 "ddgst": ${ddgst:-false} 00:11:10.919 }, 00:11:10.919 "method": "bdev_nvme_attach_controller" 00:11:10.919 } 00:11:10.919 EOF 00:11:10.919 )") 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:10.919 { 00:11:10.919 "params": { 00:11:10.919 "name": "Nvme$subsystem", 00:11:10.919 "trtype": "$TEST_TRANSPORT", 00:11:10.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.919 "adrfam": "ipv4", 00:11:10.919 "trsvcid": "$NVMF_PORT", 00:11:10.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.919 "hdgst": ${hdgst:-false}, 
00:11:10.919 "ddgst": ${ddgst:-false} 00:11:10.919 }, 00:11:10.919 "method": "bdev_nvme_attach_controller" 00:11:10.919 } 00:11:10.919 EOF 00:11:10.919 )") 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:10.919 { 00:11:10.919 "params": { 00:11:10.919 "name": "Nvme$subsystem", 00:11:10.919 "trtype": "$TEST_TRANSPORT", 00:11:10.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.919 "adrfam": "ipv4", 00:11:10.919 "trsvcid": "$NVMF_PORT", 00:11:10.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.919 "hdgst": ${hdgst:-false}, 00:11:10.919 "ddgst": ${ddgst:-false} 00:11:10.919 }, 00:11:10.919 "method": "bdev_nvme_attach_controller" 00:11:10.919 } 00:11:10.919 EOF 00:11:10.919 )") 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 849668 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:10.919 10:38:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:10.919 "params": { 00:11:10.919 "name": "Nvme1", 00:11:10.919 "trtype": "tcp", 00:11:10.919 "traddr": "10.0.0.2", 00:11:10.919 "adrfam": "ipv4", 00:11:10.919 "trsvcid": "4420", 00:11:10.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.919 "hdgst": false, 00:11:10.919 "ddgst": false 00:11:10.919 }, 00:11:10.919 "method": "bdev_nvme_attach_controller" 00:11:10.919 }' 00:11:10.919 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:10.919 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:10.919 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:10.919 "params": { 00:11:10.919 "name": "Nvme1", 00:11:10.919 "trtype": "tcp", 00:11:10.919 "traddr": "10.0.0.2", 00:11:10.919 "adrfam": "ipv4", 00:11:10.919 "trsvcid": "4420", 00:11:10.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.919 "hdgst": false, 00:11:10.919 "ddgst": false 00:11:10.919 }, 00:11:10.919 "method": "bdev_nvme_attach_controller" 00:11:10.919 }' 00:11:10.919 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:10.919 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:10.919 "params": { 00:11:10.919 "name": "Nvme1", 00:11:10.919 "trtype": "tcp", 00:11:10.919 "traddr": "10.0.0.2", 00:11:10.919 "adrfam": "ipv4", 00:11:10.919 "trsvcid": "4420", 00:11:10.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.919 "hdgst": false, 00:11:10.919 "ddgst": false 00:11:10.919 }, 00:11:10.919 "method": "bdev_nvme_attach_controller" 00:11:10.919 }' 00:11:10.919 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:10.919 10:38:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:10.919 "params": { 00:11:10.919 "name": "Nvme1", 00:11:10.919 "trtype": "tcp", 00:11:10.919 "traddr": "10.0.0.2", 00:11:10.919 "adrfam": "ipv4", 00:11:10.919 "trsvcid": "4420", 00:11:10.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.919 "hdgst": false, 00:11:10.919 "ddgst": false 00:11:10.919 }, 00:11:10.919 "method": "bdev_nvme_attach_controller" 00:11:10.919 }' 00:11:10.919 [2024-11-19 10:38:50.037120] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:11:10.919 [2024-11-19 10:38:50.037206] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:10.919 [2024-11-19 10:38:50.040526] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:11:10.919 [2024-11-19 10:38:50.040605] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:10.919 [2024-11-19 10:38:50.042726] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:11:10.919 [2024-11-19 10:38:50.042788] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:10.919 [2024-11-19 10:38:50.043064] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
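Four bdevperf clients are launched back to back here, one per workload, and the EAL parameter lines show how they coexist on one host: disjoint core masks (0x10, 0x20, 0x40, 0x80), a distinct instance id (-i 1 through 4), and a matching --file-prefix (spdk1 through spdk4) so each process keeps its own DPDK hugepage state. The launch pattern, condensed (binary path shortened; the per-workload arguments are the ones visible in the trace):

    workloads=(write read flush unmap)
    masks=(0x10 0x20 0x40 0x80)
    for n in 0 1 2 3; do
        bdevperf -m ${masks[n]} -i $((n + 1)) --json <(gen_nvmf_target_json) \
                 -q 128 -o 4096 -w ${workloads[n]} -t 1 -s 256 &
        pids+=($!)
    done
    wait "${pids[@]}"    # the wait calls in the trace collect these

The --json /dev/fd/63 seen in the trace is exactly this kind of process substitution: the generated config is handed over a pipe and never touches disk.
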
00:11:10.919 [2024-11-19 10:38:50.043124] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:11.182 [2024-11-19 10:38:50.263015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.182 [2024-11-19 10:38:50.306193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:11.182 [2024-11-19 10:38:50.334156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.182 [2024-11-19 10:38:50.373437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:11.443 [2024-11-19 10:38:50.405901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.443 [2024-11-19 10:38:50.445618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:11.443 [2024-11-19 10:38:50.498067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.443 [2024-11-19 10:38:50.538584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:11.706 Running I/O for 1 seconds... 00:11:11.706 Running I/O for 1 seconds... 00:11:11.706 Running I/O for 1 seconds... 00:11:11.706 Running I/O for 1 seconds... 00:11:12.656 188616.00 IOPS, 736.78 MiB/s 00:11:12.656 Latency(us) 00:11:12.656 [2024-11-19T09:38:51.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.656 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:12.656 Nvme1n1 : 1.00 188243.37 735.33 0.00 0.00 675.83 300.37 1966.08 00:11:12.656 [2024-11-19T09:38:51.851Z] =================================================================================================================== 00:11:12.656 [2024-11-19T09:38:51.851Z] Total : 188243.37 735.33 0.00 0.00 675.83 300.37 1966.08 00:11:12.656 12144.00 IOPS, 47.44 MiB/s 00:11:12.656 Latency(us) 00:11:12.656 [2024-11-19T09:38:51.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.656 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:12.656 Nvme1n1 : 1.01 12203.65 47.67 0.00 0.00 10452.44 5079.04 15182.51 00:11:12.656 [2024-11-19T09:38:51.851Z] =================================================================================================================== 00:11:12.656 [2024-11-19T09:38:51.851Z] Total : 12203.65 47.67 0.00 0.00 10452.44 5079.04 15182.51 00:11:12.656 9749.00 IOPS, 38.08 MiB/s 00:11:12.656 Latency(us) 00:11:12.656 [2024-11-19T09:38:51.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.656 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:12.656 Nvme1n1 : 1.01 9821.06 38.36 0.00 0.00 12984.33 4423.68 19551.57 00:11:12.656 [2024-11-19T09:38:51.851Z] =================================================================================================================== 00:11:12.656 [2024-11-19T09:38:51.851Z] Total : 9821.06 38.36 0.00 0.00 12984.33 4423.68 19551.57 00:11:12.916 8874.00 IOPS, 34.66 MiB/s 00:11:12.916 Latency(us) 00:11:12.916 [2024-11-19T09:38:52.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.917 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:12.917 Nvme1n1 : 1.01 8944.91 34.94 0.00 0.00 14256.98 5215.57 25777.49 00:11:12.917 [2024-11-19T09:38:52.112Z] 
=================================================================================================================== 00:11:12.917 [2024-11-19T09:38:52.112Z] Total : 8944.91 34.94 0.00 0.00 14256.98 5215.57 25777.49 00:11:12.917 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 849670 00:11:12.917 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 849672 00:11:12.917 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 849675 00:11:12.917 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.917 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.917 10:38:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.917 rmmod nvme_tcp 00:11:12.917 rmmod nvme_fabrics 00:11:12.917 rmmod nvme_keyring 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 849464 ']' 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 849464 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 849464 ']' 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 849464 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.917 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 849464 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 849464' 00:11:13.178 killing process with pid 849464 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 849464 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 849464 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.178 10:38:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.725 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.725 00:11:15.725 real 0m13.182s 00:11:15.725 user 0m20.274s 00:11:15.725 sys 0m7.541s 00:11:15.725 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.725 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.725 ************************************ 00:11:15.725 END TEST nvmf_bdev_io_wait 00:11:15.726 ************************************ 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:15.726 ************************************ 00:11:15.726 START TEST nvmf_queue_depth 00:11:15.726 ************************************ 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:15.726 * Looking for test storage... 
00:11:15.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:15.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.726 --rc genhtml_branch_coverage=1 00:11:15.726 --rc genhtml_function_coverage=1 00:11:15.726 --rc genhtml_legend=1 00:11:15.726 --rc geninfo_all_blocks=1 00:11:15.726 --rc geninfo_unexecuted_blocks=1 00:11:15.726 00:11:15.726 ' 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:15.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.726 --rc genhtml_branch_coverage=1 00:11:15.726 --rc genhtml_function_coverage=1 00:11:15.726 --rc genhtml_legend=1 00:11:15.726 --rc geninfo_all_blocks=1 00:11:15.726 --rc geninfo_unexecuted_blocks=1 00:11:15.726 00:11:15.726 ' 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:15.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.726 --rc genhtml_branch_coverage=1 00:11:15.726 --rc genhtml_function_coverage=1 00:11:15.726 --rc genhtml_legend=1 00:11:15.726 --rc geninfo_all_blocks=1 00:11:15.726 --rc geninfo_unexecuted_blocks=1 00:11:15.726 00:11:15.726 ' 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:15.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.726 --rc genhtml_branch_coverage=1 00:11:15.726 --rc genhtml_function_coverage=1 00:11:15.726 --rc genhtml_legend=1 00:11:15.726 --rc geninfo_all_blocks=1 00:11:15.726 --rc geninfo_unexecuted_blocks=1 00:11:15.726 00:11:15.726 ' 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.726 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.727 10:38:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:23.875 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:23.875 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:23.875 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:23.875 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.875 10:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.875 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.875 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:11:23.876 00:11:23.876 --- 10.0.0.2 ping statistics --- 00:11:23.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.876 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:11:23.876 00:11:23.876 --- 10.0.0.1 ping statistics --- 00:11:23.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.876 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=854482 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 854482 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 854482 ']' 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.876 10:39:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:23.876 [2024-11-19 10:39:02.334039] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
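The queue-depth run reuses the two-port loopback topology configured just above: the first physical port (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened, and connectivity is pinged in both directions before the target starts inside the namespace. A condensed sketch using the names and addresses traced in this run (binary path shortened):

# target side lives in a netns so both ports of one host can talk over the wire
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &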
00:11:23.876 [2024-11-19 10:39:02.334103] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.876 [2024-11-19 10:39:02.434449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.876 [2024-11-19 10:39:02.484707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.876 [2024-11-19 10:39:02.484764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.876 [2024-11-19 10:39:02.484774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.876 [2024-11-19 10:39:02.484781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.876 [2024-11-19 10:39:02.484787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.876 [2024-11-19 10:39:02.485592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:24.138 [2024-11-19 10:39:03.216880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:24.138 Malloc0 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.138 10:39:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:24.138 [2024-11-19 10:39:03.278269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=854804 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 854804 /var/tmp/bdevperf.sock 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 854804 ']' 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:24.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.138 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:24.399 [2024-11-19 10:39:03.337515] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
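At this point the target is fully configured for the test: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420, with bdevperf started idle (-z) on its own RPC socket. A condensed sketch of the same sequence via rpc.py, flags as traced (the controller attach and perform_tests calls follow in the trace below; paths shortened, RPC variable is ours):

RPC=scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator: queue depth 1024, 4 KiB verify I/O for 10 s, driven over a second socket
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests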
00:11:24.399 [2024-11-19 10:39:03.337585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854804 ] 00:11:24.399 [2024-11-19 10:39:03.416951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.399 [2024-11-19 10:39:03.477266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.399 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.399 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:24.399 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:24.399 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.399 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:24.660 NVMe0n1 00:11:24.660 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.660 10:39:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:24.660 Running I/O for 10 seconds... 00:11:26.988 8880.00 IOPS, 34.69 MiB/s [2024-11-19T09:39:07.126Z] 9198.00 IOPS, 35.93 MiB/s [2024-11-19T09:39:08.068Z] 10240.00 IOPS, 40.00 MiB/s [2024-11-19T09:39:09.011Z] 11008.00 IOPS, 43.00 MiB/s [2024-11-19T09:39:09.951Z] 11500.80 IOPS, 44.92 MiB/s [2024-11-19T09:39:10.895Z] 11930.33 IOPS, 46.60 MiB/s [2024-11-19T09:39:12.278Z] 12136.57 IOPS, 47.41 MiB/s [2024-11-19T09:39:13.218Z] 12369.12 IOPS, 48.32 MiB/s [2024-11-19T09:39:14.160Z] 12508.67 IOPS, 48.86 MiB/s [2024-11-19T09:39:14.160Z] 12657.20 IOPS, 49.44 MiB/s 00:11:34.965 Latency(us) 00:11:34.965 [2024-11-19T09:39:14.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:34.965 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:34.965 Verification LBA range: start 0x0 length 0x4000 00:11:34.965 NVMe0n1 : 10.06 12682.32 49.54 0.00 0.00 80467.55 18240.85 71215.79 00:11:34.965 [2024-11-19T09:39:14.160Z] =================================================================================================================== 00:11:34.965 [2024-11-19T09:39:14.160Z] Total : 12682.32 49.54 0.00 0.00 80467.55 18240.85 71215.79 00:11:34.965 { 00:11:34.965 "results": [ 00:11:34.965 { 00:11:34.965 "job": "NVMe0n1", 00:11:34.965 "core_mask": "0x1", 00:11:34.965 "workload": "verify", 00:11:34.965 "status": "finished", 00:11:34.965 "verify_range": { 00:11:34.965 "start": 0, 00:11:34.965 "length": 16384 00:11:34.965 }, 00:11:34.965 "queue_depth": 1024, 00:11:34.965 "io_size": 4096, 00:11:34.965 "runtime": 10.056602, 00:11:34.965 "iops": 12682.315557481543, 00:11:34.965 "mibps": 49.54029514641228, 00:11:34.965 "io_failed": 0, 00:11:34.965 "io_timeout": 0, 00:11:34.965 "avg_latency_us": 80467.55107560183, 00:11:34.965 "min_latency_us": 18240.853333333333, 00:11:34.965 "max_latency_us": 71215.78666666667 00:11:34.965 } 00:11:34.965 ], 00:11:34.965 "core_count": 1 00:11:34.965 } 00:11:34.965 10:39:13 
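The summary block above is internally consistent and easy to sanity-check: at a 4 KiB I/O size, MiB/s is simply IOPS/256, so 12682.32 IOPS ≈ 49.54 MiB/s, matching the reported "mibps" field; and by Little's law the average latency at queue depth 1024 should be about 1024 / 12682.32 s ≈ 80.7 ms, in line with the measured 80467.55 us. A one-liner to reproduce both figures:

awk 'BEGIN { printf "%.2f MiB/s  %.1f ms\n", 12682.32/256, 1024/12682.32*1000 }'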
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 854804 00:11:34.965 10:39:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 854804 ']' 00:11:34.965 10:39:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 854804 00:11:34.965 10:39:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:34.965 10:39:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.965 10:39:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 854804 00:11:34.965 10:39:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.965 10:39:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.965 10:39:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 854804' 00:11:34.965 killing process with pid 854804 00:11:34.965 10:39:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 854804 00:11:34.965 Received shutdown signal, test time was about 10.000000 seconds 00:11:34.965 00:11:34.965 Latency(us) 00:11:34.965 [2024-11-19T09:39:14.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:34.965 [2024-11-19T09:39:14.160Z] =================================================================================================================== 00:11:34.965 [2024-11-19T09:39:14.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:34.965 10:39:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 854804 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:34.965 rmmod nvme_tcp 00:11:34.965 rmmod nvme_fabrics 00:11:34.965 rmmod nvme_keyring 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 854482 ']' 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 854482 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 854482 ']' 00:11:34.965 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 854482 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 854482 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 854482' 00:11:35.226 killing process with pid 854482 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 854482 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 854482 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.226 10:39:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:37.772 00:11:37.772 real 0m21.950s 00:11:37.772 user 0m24.538s 00:11:37.772 sys 0m7.042s 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:37.772 ************************************ 00:11:37.772 END TEST nvmf_queue_depth 00:11:37.772 ************************************ 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.772 ************************************ 00:11:37.772 START TEST nvmf_target_multipath 00:11:37.772 ************************************ 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:37.772 * Looking for test storage... 00:11:37.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.772 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:37.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.773 --rc genhtml_branch_coverage=1 00:11:37.773 --rc genhtml_function_coverage=1 00:11:37.773 --rc genhtml_legend=1 00:11:37.773 --rc geninfo_all_blocks=1 00:11:37.773 --rc geninfo_unexecuted_blocks=1 00:11:37.773 00:11:37.773 ' 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:37.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.773 --rc genhtml_branch_coverage=1 00:11:37.773 --rc genhtml_function_coverage=1 00:11:37.773 --rc genhtml_legend=1 00:11:37.773 --rc geninfo_all_blocks=1 00:11:37.773 --rc geninfo_unexecuted_blocks=1 00:11:37.773 00:11:37.773 ' 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:37.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.773 --rc genhtml_branch_coverage=1 00:11:37.773 --rc genhtml_function_coverage=1 00:11:37.773 --rc genhtml_legend=1 00:11:37.773 --rc geninfo_all_blocks=1 00:11:37.773 --rc geninfo_unexecuted_blocks=1 00:11:37.773 00:11:37.773 ' 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:37.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.773 --rc genhtml_branch_coverage=1 00:11:37.773 --rc genhtml_function_coverage=1 00:11:37.773 --rc genhtml_legend=1 00:11:37.773 --rc geninfo_all_blocks=1 00:11:37.773 --rc geninfo_unexecuted_blocks=1 00:11:37.773 00:11:37.773 ' 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:37.773 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.774 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.774 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.774 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:37.774 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:37.774 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:37.774 10:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:45.918 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:45.918 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:45.918 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.918 10:39:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:45.918 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.918 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.919 10:39:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:45.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:11:45.919 00:11:45.919 --- 10.0.0.2 ping statistics --- 00:11:45.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.919 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:11:45.919 00:11:45.919 --- 10.0.0.1 ping statistics --- 00:11:45.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.919 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:45.919 only one NIC for nvmf test 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
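nvmftestinit above carves the two-port NIC into a self-contained test topology: one port stays in the root namespace as the initiator, the other is moved into a private network namespace and becomes the target, a tagged iptables rule opens the NVMe/TCP port, and both directions are ping-verified. Because no second usable interface is present, multipath.sh then prints 'only one NIC for nvmf test' and tears everything back down. A condensed sketch of that bring-up and the matching teardown; the cvl_0_* names and the 10.0.0.0/24 addressing are this run's values, and the final ip netns del is an assumption, since the trace hides the _remove_spdk_ns internals:

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"              # target port now visible only inside $NS
ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open port 4420, with a comment tag so cleanup can find the rule again later.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                             # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1         # target ns -> root ns
# Teardown mirrors it: scrub only the tagged rules, flush, drop the namespace.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush "$INI_IF"
ip netns del "$NS"                             # assumed; _remove_spdk_ns is not traced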
00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.919 rmmod nvme_tcp 00:11:45.919 rmmod nvme_fabrics 00:11:45.919 rmmod nvme_keyring 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.919 10:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:47.303 00:11:47.303 real 0m9.984s 00:11:47.303 user 0m2.196s 00:11:47.303 sys 0m5.736s 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.303 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:47.303 ************************************ 00:11:47.303 END TEST nvmf_target_multipath 00:11:47.303 ************************************ 00:11:47.563 10:39:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:47.563 10:39:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.563 10:39:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.563 10:39:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:47.563 ************************************ 00:11:47.563 START TEST nvmf_zcopy 00:11:47.563 ************************************ 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:47.564 * Looking for test storage... 
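One diagnostic in this trace deserves a note before the next test trips it again: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected (it reappears below when zcopy.sh re-sources common.sh). The xtrace shows the cause: the guard expands to '[' '' -eq 1 ']', so an unset or empty variable reaches an arithmetic test that needs an integer. The test simply falls through, so the run is unaffected, but the noise is avoidable. A sketch of the usual defensive spellings; FLAG is a stand-in, since the trace does not show which variable line 33 actually reads:

# '[' '' -eq 1 ']' errors out because -eq requires integers on both sides.
# Supplying a default keeps the test quiet and well-defined:
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi
# Equivalent: skip the arithmetic test entirely when the variable is empty.
if [ -n "${FLAG:-}" ] && [ "$FLAG" -eq 1 ]; then
    echo "flag enabled"
fi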
00:11:47.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.564 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:47.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.825 --rc genhtml_branch_coverage=1 00:11:47.825 --rc genhtml_function_coverage=1 00:11:47.825 --rc genhtml_legend=1 00:11:47.825 --rc geninfo_all_blocks=1 00:11:47.825 --rc geninfo_unexecuted_blocks=1 00:11:47.825 00:11:47.825 ' 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:47.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.825 --rc genhtml_branch_coverage=1 00:11:47.825 --rc genhtml_function_coverage=1 00:11:47.825 --rc genhtml_legend=1 00:11:47.825 --rc geninfo_all_blocks=1 00:11:47.825 --rc geninfo_unexecuted_blocks=1 00:11:47.825 00:11:47.825 ' 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:47.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.825 --rc genhtml_branch_coverage=1 00:11:47.825 --rc genhtml_function_coverage=1 00:11:47.825 --rc genhtml_legend=1 00:11:47.825 --rc geninfo_all_blocks=1 00:11:47.825 --rc geninfo_unexecuted_blocks=1 00:11:47.825 00:11:47.825 ' 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:47.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.825 --rc genhtml_branch_coverage=1 00:11:47.825 --rc genhtml_function_coverage=1 00:11:47.825 --rc genhtml_legend=1 00:11:47.825 --rc geninfo_all_blocks=1 00:11:47.825 --rc geninfo_unexecuted_blocks=1 00:11:47.825 00:11:47.825 ' 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated golangci/protoc/go toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[repeated toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.825 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT
SIGTERM EXIT 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.826 10:39:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:55.963 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:55.963 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:55.963 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:55.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:55.964 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:55.964 10:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:55.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:11:55.964 00:11:55.964 --- 10.0.0.2 ping statistics --- 00:11:55.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.964 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:55.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:55.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:11:55.964 00:11:55.964 --- 10.0.0.1 ping statistics --- 00:11:55.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.964 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=865792 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 865792 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 865792 ']' 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.964 10:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:55.964 [2024-11-19 10:39:34.359485] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:11:55.964 [2024-11-19 10:39:34.359555] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.964 [2024-11-19 10:39:34.458425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.964 [2024-11-19 10:39:34.509636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.964 [2024-11-19 10:39:34.509686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.964 [2024-11-19 10:39:34.509695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.964 [2024-11-19 10:39:34.509702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.964 [2024-11-19 10:39:34.509708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.964 [2024-11-19 10:39:34.510527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:56.225 [2024-11-19 10:39:35.217926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:56.225 [2024-11-19 10:39:35.242166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:56.225 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:56.226 malloc0 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:56.226 { 00:11:56.226 "params": { 00:11:56.226 "name": "Nvme$subsystem", 00:11:56.226 "trtype": "$TEST_TRANSPORT", 00:11:56.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:56.226 "adrfam": "ipv4", 00:11:56.226 "trsvcid": "$NVMF_PORT", 00:11:56.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:56.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:56.226 "hdgst": ${hdgst:-false}, 00:11:56.226 "ddgst": ${ddgst:-false} 00:11:56.226 }, 00:11:56.226 "method": "bdev_nvme_attach_controller" 00:11:56.226 } 00:11:56.226 EOF 00:11:56.226 )") 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
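rpc_cmd in the trace is the harness wrapper over scripts/rpc.py. Issued by hand against the already-running target, the same zcopy bring-up would look roughly like the sketch below; the socket defaults to /var/tmp/spdk.sock, which the namespace-wrapped target still creates on the shared filesystem, and all flags are copied from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Create the TCP transport with zero-copy enabled (flags exactly as passed above).
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem: allow any host (-a), set serial (-s), cap namespaces at 10 (-m).
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MB RAM-backed bdev with 4096-byte blocks, exported as namespace 1.
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1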
00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:11:56.226 10:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:11:56.226 "params": {
00:11:56.226 "name": "Nvme1",
00:11:56.226 "trtype": "tcp",
00:11:56.226 "traddr": "10.0.0.2",
00:11:56.226 "adrfam": "ipv4",
00:11:56.226 "trsvcid": "4420",
00:11:56.226 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:11:56.226 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:11:56.226 "hdgst": false,
00:11:56.226 "ddgst": false
00:11:56.226 },
00:11:56.226 "method": "bdev_nvme_attach_controller"
00:11:56.226 }'
00:11:56.226 [2024-11-19 10:39:35.342342] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:11:56.226 [2024-11-19 10:39:35.342409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866001 ]
00:11:56.487 [2024-11-19 10:39:35.434222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:56.487 [2024-11-19 10:39:35.486562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:56.748 Running I/O for 10 seconds...
00:11:59.075 7521.00 IOPS, 58.76 MiB/s [2024-11-19T09:39:39.210Z] 8537.00 IOPS, 66.70 MiB/s [2024-11-19T09:39:40.150Z] 8878.33 IOPS, 69.36 MiB/s [2024-11-19T09:39:41.139Z] 9057.25 IOPS, 70.76 MiB/s [2024-11-19T09:39:42.179Z] 9162.00 IOPS, 71.58 MiB/s [2024-11-19T09:39:43.210Z] 9232.83 IOPS, 72.13 MiB/s [2024-11-19T09:39:44.185Z] 9278.86 IOPS, 72.49 MiB/s [2024-11-19T09:39:45.126Z] 9316.75 IOPS, 72.79 MiB/s [2024-11-19T09:39:46.067Z] 9346.33 IOPS, 73.02 MiB/s [2024-11-19T09:39:46.067Z] 9371.20 IOPS, 73.21 MiB/s
00:12:06.872 Latency(us)
00:12:06.872 [2024-11-19T09:39:46.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:06.872 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:06.872 Verification LBA range: start 0x0 length 0x1000
00:12:06.872 Nvme1n1 : 10.01 9374.74 73.24 0.00 0.00 13607.50 2293.76 27415.89
00:12:06.872 [2024-11-19T09:39:46.067Z] ===================================================================================================================
00:12:06.872 [2024-11-19T09:39:46.067Z] Total : 9374.74 73.24 0.00 0.00 13607.50 2293.76 27415.89
00:12:06.872 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=868039
00:12:06.872 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:12:06.872 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:06.872 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:12:06.872 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:12:06.872 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:12:06.872 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:12:06.872 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:12:06.872 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:12:06.872 {
00:12:06.872 "params": {
00:12:06.872 "name":
"Nvme$subsystem", 00:12:06.872 "trtype": "$TEST_TRANSPORT", 00:12:06.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.872 "adrfam": "ipv4", 00:12:06.872 "trsvcid": "$NVMF_PORT", 00:12:06.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.872 "hdgst": ${hdgst:-false}, 00:12:06.872 "ddgst": ${ddgst:-false} 00:12:06.872 }, 00:12:06.873 "method": "bdev_nvme_attach_controller" 00:12:06.873 } 00:12:06.873 EOF 00:12:06.873 )") 00:12:06.873 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:06.873 [2024-11-19 10:39:45.993136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.873 [2024-11-19 10:39:45.993170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.873 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:06.873 10:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:06.873 10:39:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:06.873 "params": { 00:12:06.873 "name": "Nvme1", 00:12:06.873 "trtype": "tcp", 00:12:06.873 "traddr": "10.0.0.2", 00:12:06.873 "adrfam": "ipv4", 00:12:06.873 "trsvcid": "4420", 00:12:06.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.873 "hdgst": false, 00:12:06.873 "ddgst": false 00:12:06.873 }, 00:12:06.873 "method": "bdev_nvme_attach_controller" 00:12:06.873 }' 00:12:06.873 [2024-11-19 10:39:46.005138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.873 [2024-11-19 10:39:46.005148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.873 [2024-11-19 10:39:46.017168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.873 [2024-11-19 10:39:46.017177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.873 [2024-11-19 10:39:46.029196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.873 [2024-11-19 10:39:46.029205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.873 [2024-11-19 10:39:46.041223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.873 [2024-11-19 10:39:46.041241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.873 [2024-11-19 10:39:46.044916] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:12:06.873 [2024-11-19 10:39:46.044965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868039 ] 00:12:06.873 [2024-11-19 10:39:46.053253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.873 [2024-11-19 10:39:46.053261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.873 [2024-11-19 10:39:46.065284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.873 [2024-11-19 10:39:46.065292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.077314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.077323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.089344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.089352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.101375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.101383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.113404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.113413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.125436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.125444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.129653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.133 [2024-11-19 10:39:46.137467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.137476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.149496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.149505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.159239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.133 [2024-11-19 10:39:46.161526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.161535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.173562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.173572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.185589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.185602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.197619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:12:07.133 [2024-11-19 10:39:46.197630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.209651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.209660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.221679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.221687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.233719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.233736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.245742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.245753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.257770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.257780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.269803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.269811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.281832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.281841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.293865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.293875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.305898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.305908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.133 [2024-11-19 10:39:46.317928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.133 [2024-11-19 10:39:46.317936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.329962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.329970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.341991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.341999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.354024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.354034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.366055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.366068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 
10:39:46.378089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.378099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.390121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.390130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.402151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.402163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.414188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.414196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.426216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.426224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.438249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.438257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.450288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.450303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 Running I/O for 5 seconds... 00:12:07.394 [2024-11-19 10:39:46.462314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.462323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.476933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.476950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.490362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.490380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.503660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.503678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.516941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.516957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.530328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.530344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.543083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.543099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.555810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:07.394 [2024-11-19 10:39:46.555826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.569144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.569166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.394 [2024-11-19 10:39:46.582334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.394 [2024-11-19 10:39:46.582350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.595486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.595502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.609336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.609357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.622606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.622622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.635800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.635816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.648376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.648392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.660819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.660835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.673931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.673947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.686834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.686850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.700143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.700163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.713437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.713453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.726534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.726549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.740049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.740064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.753703] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.753719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.767145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.767165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.780592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.780607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.792884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.792899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.805896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.805912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.819145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.819164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.832748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.832763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.654 [2024-11-19 10:39:46.845272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.654 [2024-11-19 10:39:46.845287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.857983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.858002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.871295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.871310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.884172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.884188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.897207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.897222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.910713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.910729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.923111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.923126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.936678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.936693] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.950179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.950194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.963641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.963656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.976408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.976423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:46.989647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:46.989663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:47.002686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:47.002701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:47.015637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:47.015652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:47.028922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:47.028936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:47.042239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:47.042254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:47.055123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:47.055138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.914 [2024-11-19 10:39:47.068390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.914 [2024-11-19 10:39:47.068405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.915 [2024-11-19 10:39:47.081778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.915 [2024-11-19 10:39:47.081794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.915 [2024-11-19 10:39:47.095311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.915 [2024-11-19 10:39:47.095326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.915 [2024-11-19 10:39:47.108080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.915 [2024-11-19 10:39:47.108095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.174 [2024-11-19 10:39:47.120369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.174 [2024-11-19 10:39:47.120384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.174 [2024-11-19 10:39:47.133793] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.174 [2024-11-19 10:39:47.133808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.146866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.146881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.159536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.159551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.172329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.172344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.185517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.185532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.199005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.199020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.211622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.211636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.224538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.224553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.237350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.237365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.250400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.250415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.263316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.263331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.276570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.276584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.289683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.289699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.302623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.302638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.315920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.315935] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.328825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.328840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.342088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.342103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.355440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.355456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.175 [2024-11-19 10:39:47.368573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.175 [2024-11-19 10:39:47.368589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.435 [2024-11-19 10:39:47.381512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.435 [2024-11-19 10:39:47.381527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.435 [2024-11-19 10:39:47.394719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.435 [2024-11-19 10:39:47.394734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.435 [2024-11-19 10:39:47.407830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.435 [2024-11-19 10:39:47.407845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.435 [2024-11-19 10:39:47.421030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.435 [2024-11-19 10:39:47.421044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.435 [2024-11-19 10:39:47.433572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.433587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.447194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.447209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.459755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.459770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 18955.00 IOPS, 148.09 MiB/s [2024-11-19T09:39:47.631Z] [2024-11-19 10:39:47.473363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.473378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.486529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.486544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.500010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.500026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 
10:39:47.513432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.513448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.525839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.525854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.538608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.538624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.551760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.551775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.565239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.565254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.578491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.578506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.592018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.592038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.605137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.605153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.436 [2024-11-19 10:39:47.618711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.436 [2024-11-19 10:39:47.618726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.697 [2024-11-19 10:39:47.632076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.697 [2024-11-19 10:39:47.632091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.645024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.645039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.657874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.657890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.671327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.671343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.684298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.684314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.696861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.696877] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.710056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.710072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.722475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.722491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.735283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.735299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.747921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.747938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.760417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.760432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.773021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.773036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.785558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.785572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.798398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.798413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.810920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.810936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.824614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.824629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.837495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.837514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.851028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.851044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.864052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.864068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.877682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.877697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.698 [2024-11-19 10:39:47.890608] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.698 [2024-11-19 10:39:47.890624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:47.903865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:47.903880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:47.916983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:47.916999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:47.930328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:47.930343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:47.942909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:47.942925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:47.955694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:47.955710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:47.968814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:47.968829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:47.981867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:47.981883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:47.994730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:47.994745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.007931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.007946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.021443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.021459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.034409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.034424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.047768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.047783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.061019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.061035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.074464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.074480] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.087044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.087064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.100561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.100577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.113178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.113193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.125905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.125921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.138423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.138438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.958 [2024-11-19 10:39:48.152204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.958 [2024-11-19 10:39:48.152219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.165262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.165278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.177607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.177623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.190369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.190385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.203665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.203681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.217100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.217116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.229929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.229945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.242752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.242768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.256344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.256359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.268771] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.268787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.282188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.282205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.295410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.295426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.307924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.307939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.320654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.320669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.333483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.333503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.346820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.346835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.360490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.360505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.373257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.373273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.386582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.386598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.218 [2024-11-19 10:39:48.400329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.218 [2024-11-19 10:39:48.400345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.478 [2024-11-19 10:39:48.413236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.478 [2024-11-19 10:39:48.413252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.478 [2024-11-19 10:39:48.426476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.478 [2024-11-19 10:39:48.426492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.478 [2024-11-19 10:39:48.440012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.478 [2024-11-19 10:39:48.440028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.478 [2024-11-19 10:39:48.453737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.478 [2024-11-19 10:39:48.453752] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.478 19023.50 IOPS, 148.62 MiB/s [2024-11-19T09:39:48.673Z] [2024-11-19 10:39:48.466917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.478 [2024-11-19 10:39:48.466933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.522 19047.67 IOPS, 148.81 MiB/s [2024-11-19T09:39:49.717Z] 00:12:11.308 19070.50 IOPS, 148.99 MiB/s [2024-11-19T09:39:50.503Z]
00:12:12.352 [2024-11-19 10:39:51.468424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.352 [2024-11-19 10:39:51.468440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:12.352 19084.40 IOPS, 149.10 MiB/s
00:12:12.352 Latency(us)
00:12:12.352 [2024-11-19T09:39:51.547Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:12.352 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:12.352 Nvme1n1                     :       5.00   19093.54     149.17       0.00       0.00    6698.75    3017.39   16384.00
[2024-11-19T09:39:51.547Z] ===================================================================================================================
00:12:12.352 [2024-11-19T09:39:51.547Z] Total                       :              19093.54     149.17       0.00       0.00    6698.75    3017.39   16384.00
00:12:12.352 [2024-11-19 10:39:51.478405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.352 [2024-11-19 10:39:51.478420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.613 [2024-11-19 10:39:51.574648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.613 [2024-11-19 10:39:51.574657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (868039) - No such process 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 868039 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:12.613 delay0 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.613 10:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:12.613 [2024-11-19 10:39:51.794329] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:20.752 [2024-11-19 10:39:58.867912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8d680 is same with the state(6) to be set 00:12:20.752 Initializing NVMe Controllers 00:12:20.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:20.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:20.752 Initialization complete. Launching workers. 00:12:20.752 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 232, failed: 33062 00:12:20.752 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33175, failed to submit 119 00:12:20.752 success 33088, unsuccessful 87, failed 0 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.752 rmmod nvme_tcp 00:12:20.752 rmmod nvme_fabrics 00:12:20.752 rmmod nvme_keyring 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 865792 ']' 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 865792 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 865792 ']' 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@958 -- # kill -0 865792 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.752 10:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 865792 00:12:20.752 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:20.752 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:20.752 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 865792' 00:12:20.753 killing process with pid 865792 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 865792 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 865792 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.753 10:39:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.138 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:22.138 00:12:22.138 real 0m34.623s 00:12:22.138 user 0m45.967s 00:12:22.138 sys 0m11.620s 00:12:22.138 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.138 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.138 ************************************ 00:12:22.138 END TEST nvmf_zcopy 00:12:22.138 ************************************ 00:12:22.138 10:40:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:22.138 10:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:22.138 10:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.138 10:40:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:22.138 ************************************ 00:12:22.138 START TEST nvmf_nmic 00:12:22.138 ************************************ 
00:12:22.138 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:22.398 * Looking for test storage... 00:12:22.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.398 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:22.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.399 --rc genhtml_branch_coverage=1 00:12:22.399 --rc genhtml_function_coverage=1 00:12:22.399 --rc genhtml_legend=1 00:12:22.399 --rc geninfo_all_blocks=1 00:12:22.399 --rc geninfo_unexecuted_blocks=1 00:12:22.399 00:12:22.399 ' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:22.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.399 --rc genhtml_branch_coverage=1 00:12:22.399 --rc genhtml_function_coverage=1 00:12:22.399 --rc genhtml_legend=1 00:12:22.399 --rc geninfo_all_blocks=1 00:12:22.399 --rc geninfo_unexecuted_blocks=1 00:12:22.399 00:12:22.399 ' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:22.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.399 --rc genhtml_branch_coverage=1 00:12:22.399 --rc genhtml_function_coverage=1 00:12:22.399 --rc genhtml_legend=1 00:12:22.399 --rc geninfo_all_blocks=1 00:12:22.399 --rc geninfo_unexecuted_blocks=1 00:12:22.399 00:12:22.399 ' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:22.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.399 --rc genhtml_branch_coverage=1 00:12:22.399 --rc genhtml_function_coverage=1 00:12:22.399 --rc genhtml_legend=1 00:12:22.399 --rc geninfo_all_blocks=1 00:12:22.399 --rc geninfo_unexecuted_blocks=1 00:12:22.399 00:12:22.399 ' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
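The lcov probe above funnels into scripts/common.sh's field-by-field version compare: both version strings are split on '.', '-' and ':' and walked left to right until one field differs. A stripped-down sketch of that comparison (hypothetical helper name; the real lt/cmp_versions/decimal chain additionally validates each field as a decimal):

  # Sketch: succeed iff version $1 is strictly older than $2, mirroring the
  # IFS=.-: field walk traced above.
  lt_sketch() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':' as in the trace
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
  }
  # Usage matching the probe above: lt_sketch 1.15 2 && echo "lcov is pre-2.0"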
00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:22.399 
10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:12:22.399 10:40:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:30.538 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:30.538 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.538 10:40:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.538 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:30.539 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:30.539 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:30.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:12:30.539 00:12:30.539 --- 10.0.0.2 ping statistics --- 00:12:30.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.539 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:12:30.539 00:12:30.539 --- 10.0.0.1 ping statistics --- 00:12:30.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.539 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=874955 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 874955 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 874955 ']' 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.539 10:40:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:30.539 [2024-11-19 10:40:09.050919] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:12:30.539 [2024-11-19 10:40:09.050983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.539 [2024-11-19 10:40:09.149486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.539 [2024-11-19 10:40:09.203464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.539 [2024-11-19 10:40:09.203518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.539 [2024-11-19 10:40:09.203527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.539 [2024-11-19 10:40:09.203534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.539 [2024-11-19 10:40:09.203540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.539 [2024-11-19 10:40:09.205625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.539 [2024-11-19 10:40:09.205787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.539 [2024-11-19 10:40:09.205952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.539 [2024-11-19 10:40:09.205952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:30.802 [2024-11-19 10:40:09.921912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:30.802 Malloc0 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.802 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:30.802 [2024-11-19 10:40:09.994776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.063 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.064 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:31.064 test case1: single bdev can't be used in multiple subsystems 00:12:31.064 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:31.064 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.064 10:40:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.064 [2024-11-19 10:40:10.018861] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:31.064 [2024-11-19 10:40:10.018905] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:31.064 [2024-11-19 10:40:10.018915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.064 request: 00:12:31.064 { 00:12:31.064 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:31.064 "namespace": { 00:12:31.064 "bdev_name": "Malloc0", 00:12:31.064 "no_auto_visible": false 
00:12:31.064 }, 00:12:31.064 "method": "nvmf_subsystem_add_ns", 00:12:31.064 "req_id": 1 00:12:31.064 } 00:12:31.064 Got JSON-RPC error response 00:12:31.064 response: 00:12:31.064 { 00:12:31.064 "code": -32602, 00:12:31.064 "message": "Invalid parameters" 00:12:31.064 } 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:31.064 Adding namespace failed - expected result. 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:31.064 test case2: host connect to nvmf target in multiple paths 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.064 [2024-11-19 10:40:10.031115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.064 10:40:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.447 10:40:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:34.359 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.359 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.359 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.359 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:34.359 10:40:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:12:36.273 10:40:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:36.273 10:40:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:36.273 10:40:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.273 10:40:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:36.273 10:40:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.273 10:40:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:12:36.273 10:40:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:36.273 [global] 00:12:36.273 thread=1 00:12:36.273 invalidate=1 00:12:36.273 rw=write 00:12:36.273 time_based=1 00:12:36.273 runtime=1 00:12:36.273 ioengine=libaio 00:12:36.273 direct=1 00:12:36.273 bs=4096 00:12:36.273 iodepth=1 00:12:36.273 norandommap=0 00:12:36.273 numjobs=1 00:12:36.273 00:12:36.273 verify_dump=1 00:12:36.273 verify_backlog=512 00:12:36.273 verify_state_save=0 00:12:36.273 do_verify=1 00:12:36.273 verify=crc32c-intel 00:12:36.273 [job0] 00:12:36.273 filename=/dev/nvme0n1 00:12:36.273 Could not set queue depth (nvme0n1) 00:12:36.533 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.533 fio-3.35 00:12:36.533 Starting 1 thread 00:12:37.474 00:12:37.474 job0: (groupid=0, jobs=1): err= 0: pid=876294: Tue Nov 19 10:40:16 2024 00:12:37.474 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:37.474 slat (nsec): min=7147, max=60201, avg=25378.56, stdev=4540.63 00:12:37.474 clat (usec): min=654, max=1282, avg=1060.53, stdev=93.77 00:12:37.474 lat (usec): min=679, max=1307, avg=1085.91, stdev=94.76 00:12:37.474 clat percentiles (usec): 00:12:37.474 | 1.00th=[ 783], 5.00th=[ 881], 10.00th=[ 938], 20.00th=[ 996], 00:12:37.474 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:12:37.474 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188], 00:12:37.474 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1287], 99.95th=[ 1287], 00:12:37.474 | 99.99th=[ 1287] 00:12:37.474 write: IOPS=757, BW=3029KiB/s (3102kB/s)(3032KiB/1001msec); 0 zone resets 00:12:37.474 slat (nsec): min=9686, max=67309, avg=29167.18, stdev=9440.31 00:12:37.475 clat (usec): min=121, max=755, avg=544.33, stdev=96.78 00:12:37.475 lat (usec): min=132, max=787, avg=573.50, stdev=100.69 00:12:37.475 clat percentiles (usec): 00:12:37.475 | 1.00th=[ 237], 5.00th=[ 351], 10.00th=[ 429], 20.00th=[ 461], 00:12:37.475 | 30.00th=[ 519], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 562], 00:12:37.475 | 70.00th=[ 603], 80.00th=[ 635], 90.00th=[ 660], 95.00th=[ 676], 00:12:37.475 | 99.00th=[ 742], 99.50th=[ 750], 99.90th=[ 758], 99.95th=[ 758], 00:12:37.475 | 99.99th=[ 758] 00:12:37.475 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:12:37.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:37.475 lat (usec) : 250=0.63%, 500=14.02%, 750=45.20%, 1000=8.35% 00:12:37.475 lat (msec) : 2=31.81% 00:12:37.475 cpu : usr=2.20%, sys=3.30%, ctx=1270, majf=0, minf=1 00:12:37.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.475 issued rwts: total=512,758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:37.475 00:12:37.475 Run status group 0 (all jobs): 00:12:37.475 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:12:37.475 WRITE: bw=3029KiB/s (3102kB/s), 3029KiB/s-3029KiB/s (3102kB/s-3102kB/s), io=3032KiB (3105kB), run=1001-1001msec 00:12:37.475 00:12:37.475 Disk stats (read/write): 00:12:37.475 nvme0n1: ios=562/584, merge=0/0, ticks=565/301, in_queue=866, util=92.69% 00:12:37.475 10:40:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.736 rmmod nvme_tcp 00:12:37.736 rmmod nvme_fabrics 00:12:37.736 rmmod nvme_keyring 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 874955 ']' 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 874955 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 874955 ']' 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 874955 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.736 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 874955 00:12:37.998 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.998 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.998 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 874955' 00:12:37.998 killing process with pid 874955 00:12:37.998 10:40:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 874955 00:12:37.998 10:40:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 874955 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.998 10:40:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:40.542 00:12:40.542 real 0m17.904s 00:12:40.542 user 0m48.272s 00:12:40.542 sys 0m6.589s 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:40.542 ************************************ 00:12:40.542 END TEST nvmf_nmic 00:12:40.542 ************************************ 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:40.542 ************************************ 00:12:40.542 START TEST nvmf_fio_target 00:12:40.542 ************************************ 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:40.542 * Looking for test storage... 
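The connect and disconnect handshakes in the nmic run above share one idea: keep re-reading lsblk for the subsystem serial until the expected device count is reached (waitforserial waits for it to appear, waitforserial_disconnect for it to vanish). A rough sketch of that polling loop, under the same serial and a similar retry budget to the trace (function name is hypothetical):

  # Sketch of the waitforserial / waitforserial_disconnect polling seen above.
  wait_serial_sketch() {
    local serial=$1 want=$2 i=0   # want=1 -> wait for attach, want=0 -> wait for detach
    while (( i++ <= 15 )); do
      local n
      n=$(lsblk -l -o NAME,SERIAL | grep -c -w "$serial")
      (( n == want )) && return 0
      sleep 2
    done
    return 1
  }
  # Usage after 'nvme connect ... -n nqn.2016-06.io.spdk:cnode1':
  #   wait_serial_sketch SPDKISFASTANDAWESOME 1
  # and after 'nvme disconnect -n nqn.2016-06.io.spdk:cnode1':
  #   wait_serial_sketch SPDKISFASTANDAWESOME 0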
00:12:40.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.542 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:40.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.543 --rc genhtml_branch_coverage=1 00:12:40.543 --rc genhtml_function_coverage=1 00:12:40.543 --rc genhtml_legend=1 00:12:40.543 --rc geninfo_all_blocks=1 00:12:40.543 --rc geninfo_unexecuted_blocks=1 00:12:40.543 00:12:40.543 ' 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:40.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.543 --rc genhtml_branch_coverage=1 00:12:40.543 --rc genhtml_function_coverage=1 00:12:40.543 --rc genhtml_legend=1 00:12:40.543 --rc geninfo_all_blocks=1 00:12:40.543 --rc geninfo_unexecuted_blocks=1 00:12:40.543 00:12:40.543 ' 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:40.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.543 --rc genhtml_branch_coverage=1 00:12:40.543 --rc genhtml_function_coverage=1 00:12:40.543 --rc genhtml_legend=1 00:12:40.543 --rc geninfo_all_blocks=1 00:12:40.543 --rc geninfo_unexecuted_blocks=1 00:12:40.543 00:12:40.543 ' 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:40.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.543 --rc genhtml_branch_coverage=1 00:12:40.543 --rc genhtml_function_coverage=1 00:12:40.543 --rc genhtml_legend=1 00:12:40.543 --rc geninfo_all_blocks=1 00:12:40.543 --rc geninfo_unexecuted_blocks=1 00:12:40.543 00:12:40.543 ' 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.543 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:40.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:40.544 10:40:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:40.544 10:40:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.690 10:40:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.690 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:48.691 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:48.691 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.691 10:40:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:48.691 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:48.691 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.691 10:40:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:48.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:12:48.691 00:12:48.691 --- 10.0.0.2 ping statistics --- 00:12:48.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.691 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:12:48.691 00:12:48.691 --- 10.0.0.1 ping statistics --- 00:12:48.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.691 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.691 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:48.692 10:40:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=880944 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 880944 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 880944 ']' 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.692 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.692 [2024-11-19 10:40:27.116059] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
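The trace above is nvmf/common.sh building the loopback topology used for the rest of the run: one port of the dual-port E810 NIC is moved into a network namespace to act as the target side (10.0.0.2) while its sibling stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits TCP port 4420, and a ping in each direction confirms reachability before nvmf_tgt is started inside the namespace. A condensed sketch of the same steps — interface names (cvl_0_0/cvl_0_1) and addresses are taken from this trace and are host-specific:

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk (with -i 0 -e 0xFFFF -m 0xF, as logged below), so its 10.0.0.2:4420 listener is only reachable from the root namespace across the physical link.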
00:12:48.692 [2024-11-19 10:40:27.116125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.692 [2024-11-19 10:40:27.215054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.692 [2024-11-19 10:40:27.267207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.692 [2024-11-19 10:40:27.267260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.692 [2024-11-19 10:40:27.267269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.692 [2024-11-19 10:40:27.267276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.692 [2024-11-19 10:40:27.267282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.692 [2024-11-19 10:40:27.269652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.692 [2024-11-19 10:40:27.269812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.692 [2024-11-19 10:40:27.269978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.692 [2024-11-19 10:40:27.269978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.953 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.953 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:48.953 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.953 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:48.953 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.953 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.954 10:40:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:49.215 [2024-11-19 10:40:28.154101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.215 10:40:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:49.476 10:40:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:49.476 10:40:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:49.476 10:40:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:49.476 10:40:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:49.737 10:40:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:49.737 10:40:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:49.999 10:40:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:49.999 10:40:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:50.258 10:40:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:50.518 10:40:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:50.518 10:40:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:50.518 10:40:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:50.518 10:40:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:50.778 10:40:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:50.778 10:40:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:51.039 10:40:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:51.039 10:40:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:51.039 10:40:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:51.299 10:40:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:51.299 10:40:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.559 10:40:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.559 [2024-11-19 10:40:30.716363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.559 10:40:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:51.819 10:40:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:52.079 10:40:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.989 10:40:32 
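At this point fio.sh has configured the target entirely over rpc.py and attached the kernel initiator to it. Condensed, the sequence traced above is the following (rpc.py stands for the full scripts/rpc.py path; NQNs, addresses, and arguments are exactly as logged, while the inline comments are annotations, not log output):

  rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as logged
  rpc.py bdev_malloc_create 64 512                      # 64 MiB, 512 B blocks; run seven times (Malloc0..Malloc6)
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
               --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces (two plain malloc bdevs, one RAID-0, one concat) surface on the initiator as /dev/nvme0n1..nvme0n4, which the waitforserial step below confirms by counting block devices whose serial matches SPDKISFASTANDAWESOME.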
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:53.989 10:40:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:53.989 10:40:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.989 10:40:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:53.989 10:40:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:53.989 10:40:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:55.901 10:40:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:55.901 10:40:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:55.901 10:40:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.901 10:40:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:55.901 10:40:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.901 10:40:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:55.901 10:40:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:55.901 [global] 00:12:55.901 thread=1 00:12:55.901 invalidate=1 00:12:55.901 rw=write 00:12:55.901 time_based=1 00:12:55.901 runtime=1 00:12:55.901 ioengine=libaio 00:12:55.901 direct=1 00:12:55.901 bs=4096 00:12:55.901 iodepth=1 00:12:55.901 norandommap=0 00:12:55.901 numjobs=1 00:12:55.901 00:12:55.901 verify_dump=1 00:12:55.901 verify_backlog=512 00:12:55.901 verify_state_save=0 00:12:55.901 do_verify=1 00:12:55.901 verify=crc32c-intel 00:12:55.901 [job0] 00:12:55.901 filename=/dev/nvme0n1 00:12:55.901 [job1] 00:12:55.901 filename=/dev/nvme0n2 00:12:55.901 [job2] 00:12:55.901 filename=/dev/nvme0n3 00:12:55.901 [job3] 00:12:55.901 filename=/dev/nvme0n4 00:12:55.901 Could not set queue depth (nvme0n1) 00:12:55.901 Could not set queue depth (nvme0n2) 00:12:55.901 Could not set queue depth (nvme0n3) 00:12:55.901 Could not set queue depth (nvme0n4) 00:12:56.162 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.162 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.162 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.162 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.162 fio-3.35 00:12:56.162 Starting 4 threads 00:12:57.547 00:12:57.547 job0: (groupid=0, jobs=1): err= 0: pid=882755: Tue Nov 19 10:40:36 2024 00:12:57.547 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:12:57.547 slat (nsec): min=26306, max=26893, avg=26522.39, stdev=136.67 00:12:57.547 clat (usec): min=1196, max=42183, avg=39391.06, stdev=9542.87 00:12:57.547 lat (usec): min=1223, max=42210, avg=39417.59, stdev=9542.87 00:12:57.547 clat percentiles (usec): 00:12:57.547 | 1.00th=[ 1205], 5.00th=[ 1205], 10.00th=[41157], 
20.00th=[41157], 00:12:57.547 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:12:57.547 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:57.547 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:57.547 | 99.99th=[42206] 00:12:57.547 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:12:57.547 slat (nsec): min=10485, max=54343, avg=31938.69, stdev=10079.27 00:12:57.547 clat (usec): min=172, max=1254, avg=603.84, stdev=118.20 00:12:57.547 lat (usec): min=182, max=1288, avg=635.78, stdev=121.53 00:12:57.547 clat percentiles (usec): 00:12:57.547 | 1.00th=[ 306], 5.00th=[ 392], 10.00th=[ 457], 20.00th=[ 498], 00:12:57.547 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 635], 00:12:57.547 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 766], 00:12:57.548 | 99.00th=[ 832], 99.50th=[ 881], 99.90th=[ 1254], 99.95th=[ 1254], 00:12:57.548 | 99.99th=[ 1254] 00:12:57.548 bw ( KiB/s): min= 4096, max= 4096, per=45.36%, avg=4096.00, stdev= 0.00, samples=1 00:12:57.548 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:57.548 lat (usec) : 250=0.19%, 500=19.43%, 750=69.81%, 1000=6.79% 00:12:57.548 lat (msec) : 2=0.57%, 50=3.21% 00:12:57.548 cpu : usr=0.87%, sys=1.44%, ctx=532, majf=0, minf=1 00:12:57.548 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.548 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.548 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.548 job1: (groupid=0, jobs=1): err= 0: pid=882773: Tue Nov 19 10:40:36 2024 00:12:57.548 read: IOPS=281, BW=1125KiB/s (1152kB/s)(1168KiB/1038msec) 00:12:57.548 slat (nsec): min=5466, max=71387, avg=25203.52, stdev=8352.71 00:12:57.548 clat (usec): min=445, max=41385, avg=2679.93, stdev=8624.15 00:12:57.548 lat (usec): min=472, max=41393, avg=2705.14, stdev=8624.13 00:12:57.548 clat percentiles (usec): 00:12:57.548 | 1.00th=[ 478], 5.00th=[ 578], 10.00th=[ 644], 20.00th=[ 668], 00:12:57.548 | 30.00th=[ 709], 40.00th=[ 758], 50.00th=[ 783], 60.00th=[ 799], 00:12:57.548 | 70.00th=[ 816], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 930], 00:12:57.548 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:57.548 | 99.99th=[41157] 00:12:57.548 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:12:57.548 slat (nsec): min=9913, max=53573, avg=30715.37, stdev=10213.54 00:12:57.548 clat (usec): min=140, max=660, avg=439.83, stdev=82.60 00:12:57.548 lat (usec): min=150, max=695, avg=470.55, stdev=86.93 00:12:57.548 clat percentiles (usec): 00:12:57.548 | 1.00th=[ 235], 5.00th=[ 293], 10.00th=[ 326], 20.00th=[ 367], 00:12:57.548 | 30.00th=[ 404], 40.00th=[ 429], 50.00th=[ 445], 60.00th=[ 465], 00:12:57.548 | 70.00th=[ 482], 80.00th=[ 510], 90.00th=[ 545], 95.00th=[ 562], 00:12:57.548 | 99.00th=[ 619], 99.50th=[ 627], 99.90th=[ 660], 99.95th=[ 660], 00:12:57.548 | 99.99th=[ 660] 00:12:57.548 bw ( KiB/s): min= 4096, max= 4096, per=45.36%, avg=4096.00, stdev= 0.00, samples=1 00:12:57.548 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:57.548 lat (usec) : 250=1.00%, 500=48.13%, 750=28.23%, 1000=20.90% 00:12:57.548 lat (msec) : 50=1.74% 00:12:57.548 cpu : usr=0.77%, sys=2.70%, ctx=805, majf=0, minf=1 00:12:57.548 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.548 issued rwts: total=292,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.548 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.548 job2: (groupid=0, jobs=1): err= 0: pid=882791: Tue Nov 19 10:40:36 2024 00:12:57.548 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:57.548 slat (nsec): min=6970, max=57318, avg=28001.63, stdev=4191.68 00:12:57.548 clat (usec): min=456, max=1307, avg=935.39, stdev=97.79 00:12:57.548 lat (usec): min=484, max=1335, avg=963.39, stdev=98.02 00:12:57.548 clat percentiles (usec): 00:12:57.548 | 1.00th=[ 668], 5.00th=[ 758], 10.00th=[ 807], 20.00th=[ 865], 00:12:57.548 | 30.00th=[ 906], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 963], 00:12:57.548 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1037], 95.00th=[ 1090], 00:12:57.548 | 99.00th=[ 1172], 99.50th=[ 1221], 99.90th=[ 1303], 99.95th=[ 1303], 00:12:57.548 | 99.99th=[ 1303] 00:12:57.548 write: IOPS=811, BW=3245KiB/s (3323kB/s)(3248KiB/1001msec); 0 zone resets 00:12:57.548 slat (nsec): min=9111, max=67722, avg=31418.52, stdev=11229.23 00:12:57.548 clat (usec): min=135, max=998, avg=580.46, stdev=137.20 00:12:57.548 lat (usec): min=170, max=1038, avg=611.87, stdev=141.69 00:12:57.548 clat percentiles (usec): 00:12:57.548 | 1.00th=[ 247], 5.00th=[ 359], 10.00th=[ 424], 20.00th=[ 469], 00:12:57.548 | 30.00th=[ 519], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 611], 00:12:57.548 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 758], 95.00th=[ 832], 00:12:57.548 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 996], 99.95th=[ 996], 00:12:57.548 | 99.99th=[ 996] 00:12:57.548 bw ( KiB/s): min= 4096, max= 4096, per=45.36%, avg=4096.00, stdev= 0.00, samples=1 00:12:57.548 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:57.548 lat (usec) : 250=0.68%, 500=14.43%, 750=41.16%, 1000=36.78% 00:12:57.548 lat (msec) : 2=6.95% 00:12:57.548 cpu : usr=2.70%, sys=5.30%, ctx=1325, majf=0, minf=1 00:12:57.548 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.548 issued rwts: total=512,812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.548 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.548 job3: (groupid=0, jobs=1): err= 0: pid=882798: Tue Nov 19 10:40:36 2024 00:12:57.548 read: IOPS=490, BW=1963KiB/s (2010kB/s)(2036KiB/1037msec) 00:12:57.548 slat (nsec): min=4176, max=58803, avg=17374.36, stdev=8495.68 00:12:57.548 clat (usec): min=171, max=41288, avg=1696.05, stdev=6618.89 00:12:57.548 lat (usec): min=176, max=41294, avg=1713.42, stdev=6620.39 00:12:57.548 clat percentiles (usec): 00:12:57.548 | 1.00th=[ 347], 5.00th=[ 388], 10.00th=[ 474], 20.00th=[ 537], 00:12:57.548 | 30.00th=[ 578], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 619], 00:12:57.548 | 70.00th=[ 635], 80.00th=[ 652], 90.00th=[ 676], 95.00th=[ 693], 00:12:57.548 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:57.548 | 99.99th=[41157] 00:12:57.548 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:12:57.548 slat (usec): min=5, max=1194, avg=19.28, stdev=68.84 00:12:57.548 clat (usec): min=94, max=446, avg=291.48, 
stdev=54.15 00:12:57.549 lat (usec): min=101, max=1420, avg=310.76, stdev=86.47 00:12:57.549 clat percentiles (usec): 00:12:57.549 | 1.00th=[ 122], 5.00th=[ 194], 10.00th=[ 223], 20.00th=[ 265], 00:12:57.549 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:12:57.549 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 355], 95.00th=[ 392], 00:12:57.549 | 99.00th=[ 437], 99.50th=[ 441], 99.90th=[ 445], 99.95th=[ 445], 00:12:57.549 | 99.99th=[ 445] 00:12:57.549 bw ( KiB/s): min= 4096, max= 4096, per=45.36%, avg=4096.00, stdev= 0.00, samples=1 00:12:57.549 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:57.549 lat (usec) : 100=0.20%, 250=8.33%, 500=47.89%, 750=42.12%, 1000=0.10% 00:12:57.549 lat (msec) : 50=1.37% 00:12:57.549 cpu : usr=0.48%, sys=1.93%, ctx=1024, majf=0, minf=1 00:12:57.549 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.549 issued rwts: total=509,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.549 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.549 00:12:57.549 Run status group 0 (all jobs): 00:12:57.549 READ: bw=5119KiB/s (5242kB/s), 69.2KiB/s-2046KiB/s (70.9kB/s-2095kB/s), io=5324KiB (5452kB), run=1001-1040msec 00:12:57.549 WRITE: bw=9031KiB/s (9248kB/s), 1969KiB/s-3245KiB/s (2016kB/s-3323kB/s), io=9392KiB (9617kB), run=1001-1040msec 00:12:57.549 00:12:57.549 Disk stats (read/write): 00:12:57.549 nvme0n1: ios=55/512, merge=0/0, ticks=552/295, in_queue=847, util=86.87% 00:12:57.549 nvme0n2: ios=335/512, merge=0/0, ticks=724/225, in_queue=949, util=90.91% 00:12:57.549 nvme0n3: ios=561/512, merge=0/0, ticks=529/249, in_queue=778, util=94.93% 00:12:57.549 nvme0n4: ios=552/512, merge=0/0, ticks=761/144, in_queue=905, util=97.32% 00:12:57.549 10:40:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:57.549 [global] 00:12:57.549 thread=1 00:12:57.549 invalidate=1 00:12:57.549 rw=randwrite 00:12:57.549 time_based=1 00:12:57.549 runtime=1 00:12:57.549 ioengine=libaio 00:12:57.549 direct=1 00:12:57.549 bs=4096 00:12:57.549 iodepth=1 00:12:57.549 norandommap=0 00:12:57.549 numjobs=1 00:12:57.549 00:12:57.549 verify_dump=1 00:12:57.549 verify_backlog=512 00:12:57.549 verify_state_save=0 00:12:57.549 do_verify=1 00:12:57.549 verify=crc32c-intel 00:12:57.549 [job0] 00:12:57.549 filename=/dev/nvme0n1 00:12:57.549 [job1] 00:12:57.549 filename=/dev/nvme0n2 00:12:57.549 [job2] 00:12:57.549 filename=/dev/nvme0n3 00:12:57.549 [job3] 00:12:57.549 filename=/dev/nvme0n4 00:12:57.549 Could not set queue depth (nvme0n1) 00:12:57.549 Could not set queue depth (nvme0n2) 00:12:57.549 Could not set queue depth (nvme0n3) 00:12:57.549 Could not set queue depth (nvme0n4) 00:12:57.809 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.809 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.809 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.809 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.809 fio-3.35 00:12:57.809 Starting 4 threads 00:12:59.224 
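Each fio run in this test is driven by scripts/fio-wrapper, which writes the job file printed above (one [jobN] section per NVMe namespace) and runs fio against it; the four runs vary only rw (write/randwrite) and iodepth (1/128). As a rough standalone equivalent of job0 from the first run, using only standard fio options and the device name as logged — a sketch for orientation, not the wrapper's exact invocation (defaulted options such as norandommap are omitted):

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread \
      --ioengine=libaio --direct=1 --invalidate=1 \
      --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0

The "Could not set queue depth" lines are warnings rather than failures — all four jobs still start and complete below — and verify=crc32c-intel is what makes these runs data-integrity checks of the NVMe-oF path rather than pure performance measurements.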
00:12:59.224 job0: (groupid=0, jobs=1): err= 0: pid=883262: Tue Nov 19 10:40:38 2024 00:12:59.224 read: IOPS=16, BW=65.4KiB/s (67.0kB/s)(68.0KiB/1040msec) 00:12:59.224 slat (nsec): min=25546, max=26174, avg=25778.35, stdev=163.69 00:12:59.224 clat (usec): min=41134, max=42253, avg=41932.07, stdev=220.00 00:12:59.224 lat (usec): min=41160, max=42279, avg=41957.85, stdev=220.00 00:12:59.224 clat percentiles (usec): 00:12:59.224 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:12:59.224 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:59.224 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:59.224 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:59.224 | 99.99th=[42206] 00:12:59.224 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:12:59.224 slat (nsec): min=9537, max=56633, avg=30040.45, stdev=8419.73 00:12:59.224 clat (usec): min=234, max=925, avg=598.53, stdev=124.57 00:12:59.224 lat (usec): min=267, max=958, avg=628.58, stdev=127.36 00:12:59.224 clat percentiles (usec): 00:12:59.224 | 1.00th=[ 277], 5.00th=[ 379], 10.00th=[ 441], 20.00th=[ 494], 00:12:59.224 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:12:59.224 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 799], 00:12:59.224 | 99.00th=[ 840], 99.50th=[ 873], 99.90th=[ 922], 99.95th=[ 922], 00:12:59.224 | 99.99th=[ 922] 00:12:59.224 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:12:59.224 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:59.224 lat (usec) : 250=0.19%, 500=20.60%, 750=65.97%, 1000=10.02% 00:12:59.224 lat (msec) : 50=3.21% 00:12:59.224 cpu : usr=0.48%, sys=1.73%, ctx=534, majf=0, minf=1 00:12:59.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.224 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.224 job1: (groupid=0, jobs=1): err= 0: pid=883273: Tue Nov 19 10:40:38 2024 00:12:59.224 read: IOPS=683, BW=2733KiB/s (2799kB/s)(2736KiB/1001msec) 00:12:59.224 slat (nsec): min=7072, max=60064, avg=24014.04, stdev=7500.82 00:12:59.224 clat (usec): min=467, max=982, avg=749.04, stdev=75.12 00:12:59.224 lat (usec): min=475, max=1009, avg=773.06, stdev=76.31 00:12:59.224 clat percentiles (usec): 00:12:59.224 | 1.00th=[ 545], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[ 685], 00:12:59.224 | 30.00th=[ 709], 40.00th=[ 734], 50.00th=[ 758], 60.00th=[ 775], 00:12:59.224 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 873], 00:12:59.224 | 99.00th=[ 914], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[ 979], 00:12:59.224 | 99.99th=[ 979] 00:12:59.224 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:12:59.224 slat (nsec): min=9689, max=65118, avg=27404.92, stdev=10622.10 00:12:59.224 clat (usec): min=128, max=823, avg=420.66, stdev=79.04 00:12:59.224 lat (usec): min=139, max=857, avg=448.07, stdev=84.54 00:12:59.224 clat percentiles (usec): 00:12:59.224 | 1.00th=[ 262], 5.00th=[ 285], 10.00th=[ 318], 20.00th=[ 343], 00:12:59.224 | 30.00th=[ 371], 40.00th=[ 416], 50.00th=[ 437], 60.00th=[ 449], 00:12:59.224 | 70.00th=[ 465], 80.00th=[ 482], 90.00th=[ 502], 95.00th=[ 537], 00:12:59.224 | 99.00th=[ 
627], 99.50th=[ 644], 99.90th=[ 668], 99.95th=[ 824], 00:12:59.224 | 99.99th=[ 824] 00:12:59.224 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:12:59.224 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:59.224 lat (usec) : 250=0.29%, 500=53.34%, 750=24.65%, 1000=21.72% 00:12:59.224 cpu : usr=2.90%, sys=4.20%, ctx=1712, majf=0, minf=1 00:12:59.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.224 issued rwts: total=684,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.224 job2: (groupid=0, jobs=1): err= 0: pid=883291: Tue Nov 19 10:40:38 2024 00:12:59.224 read: IOPS=478, BW=1913KiB/s (1958kB/s)(1924KiB/1006msec) 00:12:59.224 slat (nsec): min=7007, max=48554, avg=27918.50, stdev=4251.92 00:12:59.224 clat (usec): min=599, max=41939, avg=1429.54, stdev=4501.38 00:12:59.224 lat (usec): min=627, max=41967, avg=1457.46, stdev=4501.40 00:12:59.224 clat percentiles (usec): 00:12:59.224 | 1.00th=[ 619], 5.00th=[ 725], 10.00th=[ 775], 20.00th=[ 832], 00:12:59.224 | 30.00th=[ 873], 40.00th=[ 914], 50.00th=[ 947], 60.00th=[ 971], 00:12:59.224 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1074], 00:12:59.224 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:12:59.224 | 99.99th=[41681] 00:12:59.224 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:12:59.224 slat (nsec): min=9467, max=52183, avg=24811.35, stdev=11135.60 00:12:59.224 clat (usec): min=248, max=835, avg=553.28, stdev=118.99 00:12:59.224 lat (usec): min=263, max=873, avg=578.09, stdev=124.09 00:12:59.224 clat percentiles (usec): 00:12:59.224 | 1.00th=[ 277], 5.00th=[ 355], 10.00th=[ 383], 20.00th=[ 457], 00:12:59.224 | 30.00th=[ 486], 40.00th=[ 519], 50.00th=[ 553], 60.00th=[ 586], 00:12:59.224 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 709], 95.00th=[ 742], 00:12:59.224 | 99.00th=[ 783], 99.50th=[ 824], 99.90th=[ 840], 99.95th=[ 840], 00:12:59.224 | 99.99th=[ 840] 00:12:59.224 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:12:59.224 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:59.224 lat (usec) : 250=0.10%, 500=17.22%, 750=35.65%, 1000=34.54% 00:12:59.224 lat (msec) : 2=11.88%, 50=0.60% 00:12:59.224 cpu : usr=2.39%, sys=2.59%, ctx=994, majf=0, minf=1 00:12:59.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.225 issued rwts: total=481,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.225 job3: (groupid=0, jobs=1): err= 0: pid=883297: Tue Nov 19 10:40:38 2024 00:12:59.225 read: IOPS=17, BW=69.9KiB/s (71.6kB/s)(72.0KiB/1030msec) 00:12:59.225 slat (nsec): min=24412, max=25504, avg=24694.61, stdev=246.91 00:12:59.225 clat (usec): min=1065, max=42046, avg=39575.72, stdev=9615.53 00:12:59.225 lat (usec): min=1089, max=42071, avg=39600.42, stdev=9615.52 00:12:59.225 clat percentiles (usec): 00:12:59.225 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41681], 00:12:59.225 | 30.00th=[41681], 
40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:12:59.225 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:59.225 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:59.225 | 99.99th=[42206] 00:12:59.225 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:12:59.225 slat (nsec): min=9540, max=63743, avg=28104.75, stdev=8045.95 00:12:59.225 clat (usec): min=282, max=878, avg=583.23, stdev=116.24 00:12:59.225 lat (usec): min=293, max=909, avg=611.33, stdev=119.24 00:12:59.225 clat percentiles (usec): 00:12:59.225 | 1.00th=[ 338], 5.00th=[ 367], 10.00th=[ 429], 20.00th=[ 478], 00:12:59.225 | 30.00th=[ 515], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 619], 00:12:59.225 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 750], 00:12:59.225 | 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 881], 99.95th=[ 881], 00:12:59.225 | 99.99th=[ 881] 00:12:59.225 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:12:59.225 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:59.225 lat (usec) : 500=26.23%, 750=65.47%, 1000=4.91% 00:12:59.225 lat (msec) : 2=0.19%, 50=3.21% 00:12:59.225 cpu : usr=0.58%, sys=1.55%, ctx=531, majf=0, minf=2 00:12:59.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.225 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.225 00:12:59.225 Run status group 0 (all jobs): 00:12:59.225 READ: bw=4615KiB/s (4726kB/s), 65.4KiB/s-2733KiB/s (67.0kB/s-2799kB/s), io=4800KiB (4915kB), run=1001-1040msec 00:12:59.225 WRITE: bw=9846KiB/s (10.1MB/s), 1969KiB/s-4092KiB/s (2016kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1040msec 00:12:59.225 00:12:59.225 Disk stats (read/write): 00:12:59.225 nvme0n1: ios=38/512, merge=0/0, ticks=787/296, in_queue=1083, util=100.00% 00:12:59.225 nvme0n2: ios=537/967, merge=0/0, ticks=972/399, in_queue=1371, util=96.53% 00:12:59.225 nvme0n3: ios=521/512, merge=0/0, ticks=795/278, in_queue=1073, util=98.84% 00:12:59.225 nvme0n4: ios=53/512, merge=0/0, ticks=569/290, in_queue=859, util=91.68% 00:12:59.225 10:40:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:59.225 [global] 00:12:59.225 thread=1 00:12:59.225 invalidate=1 00:12:59.225 rw=write 00:12:59.225 time_based=1 00:12:59.225 runtime=1 00:12:59.225 ioengine=libaio 00:12:59.225 direct=1 00:12:59.225 bs=4096 00:12:59.225 iodepth=128 00:12:59.225 norandommap=0 00:12:59.225 numjobs=1 00:12:59.225 00:12:59.225 verify_dump=1 00:12:59.225 verify_backlog=512 00:12:59.225 verify_state_save=0 00:12:59.225 do_verify=1 00:12:59.225 verify=crc32c-intel 00:12:59.225 [job0] 00:12:59.225 filename=/dev/nvme0n1 00:12:59.225 [job1] 00:12:59.225 filename=/dev/nvme0n2 00:12:59.225 [job2] 00:12:59.225 filename=/dev/nvme0n3 00:12:59.225 [job3] 00:12:59.225 filename=/dev/nvme0n4 00:12:59.225 Could not set queue depth (nvme0n1) 00:12:59.225 Could not set queue depth (nvme0n2) 00:12:59.225 Could not set queue depth (nvme0n3) 00:12:59.225 Could not set queue depth (nvme0n4) 00:12:59.489 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:12:59.489 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.489 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.489 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.489 fio-3.35 00:12:59.489 Starting 4 threads 00:13:00.932 00:13:00.932 job0: (groupid=0, jobs=1): err= 0: pid=883751: Tue Nov 19 10:40:39 2024 00:13:00.932 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:13:00.932 slat (nsec): min=922, max=6765.1k, avg=72852.85, stdev=492303.97 00:13:00.932 clat (usec): min=5028, max=27910, avg=9784.73, stdev=3432.27 00:13:00.932 lat (usec): min=5033, max=27912, avg=9857.58, stdev=3476.80 00:13:00.932 clat percentiles (usec): 00:13:00.932 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7308], 00:13:00.932 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9110], 00:13:00.932 | 70.00th=[10159], 80.00th=[11863], 90.00th=[15139], 95.00th=[17171], 00:13:00.932 | 99.00th=[21103], 99.50th=[23462], 99.90th=[25560], 99.95th=[27919], 00:13:00.932 | 99.99th=[27919] 00:13:00.932 write: IOPS=7046, BW=27.5MiB/s (28.9MB/s)(27.7MiB/1005msec); 0 zone resets 00:13:00.932 slat (nsec): min=1640, max=6403.1k, avg=64992.58, stdev=422749.92 00:13:00.932 clat (usec): min=1401, max=27910, avg=8798.70, stdev=4891.11 00:13:00.932 lat (usec): min=1413, max=27919, avg=8863.69, stdev=4925.99 00:13:00.932 clat percentiles (usec): 00:13:00.932 | 1.00th=[ 3589], 5.00th=[ 3851], 10.00th=[ 4686], 20.00th=[ 5407], 00:13:00.932 | 30.00th=[ 5932], 40.00th=[ 6456], 50.00th=[ 7111], 60.00th=[ 8029], 00:13:00.932 | 70.00th=[ 9241], 80.00th=[10945], 90.00th=[16712], 95.00th=[21103], 00:13:00.932 | 99.00th=[23987], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:13:00.932 | 99.99th=[27919] 00:13:00.933 bw ( KiB/s): min=24624, max=31016, per=31.77%, avg=27820.00, stdev=4519.83, samples=2 00:13:00.933 iops : min= 6156, max= 7754, avg=6955.00, stdev=1129.96, samples=2 00:13:00.933 lat (msec) : 2=0.10%, 4=3.36%, 10=69.22%, 20=23.48%, 50=3.84% 00:13:00.933 cpu : usr=5.48%, sys=7.37%, ctx=380, majf=0, minf=1 00:13:00.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:00.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.933 issued rwts: total=6656,7082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.933 job1: (groupid=0, jobs=1): err= 0: pid=883765: Tue Nov 19 10:40:39 2024 00:13:00.933 read: IOPS=3367, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1007msec) 00:13:00.933 slat (nsec): min=925, max=18788k, avg=134049.23, stdev=1038762.70 00:13:00.933 clat (usec): min=1085, max=62394, avg=18817.55, stdev=14282.36 00:13:00.933 lat (usec): min=3914, max=62403, avg=18951.60, stdev=14344.90 00:13:00.933 clat percentiles (usec): 00:13:00.933 | 1.00th=[ 4490], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7308], 00:13:00.933 | 30.00th=[ 7898], 40.00th=[ 9372], 50.00th=[10814], 60.00th=[20317], 00:13:00.933 | 70.00th=[25297], 80.00th=[30016], 90.00th=[41681], 95.00th=[48497], 00:13:00.933 | 99.00th=[56361], 99.50th=[62129], 99.90th=[62129], 99.95th=[62653], 00:13:00.933 | 99.99th=[62653] 00:13:00.933 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 
00:13:00.933 slat (nsec): min=1618, max=16584k, avg=142644.29, stdev=1022829.66 00:13:00.933 clat (usec): min=754, max=41847, avg=17651.98, stdev=12756.79 00:13:00.933 lat (usec): min=764, max=41856, avg=17794.63, stdev=12833.82 00:13:00.933 clat percentiles (usec): 00:13:00.933 | 1.00th=[ 1532], 5.00th=[ 3818], 10.00th=[ 4621], 20.00th=[ 7111], 00:13:00.933 | 30.00th=[ 7635], 40.00th=[ 8586], 50.00th=[10683], 60.00th=[16057], 00:13:00.933 | 70.00th=[29230], 80.00th=[33817], 90.00th=[36439], 95.00th=[38011], 00:13:00.933 | 99.00th=[40109], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:00.933 | 99.99th=[41681] 00:13:00.933 bw ( KiB/s): min= 8192, max=20480, per=16.37%, avg=14336.00, stdev=8688.93, samples=2 00:13:00.933 iops : min= 2048, max= 5120, avg=3584.00, stdev=2172.23, samples=2 00:13:00.933 lat (usec) : 1000=0.03% 00:13:00.933 lat (msec) : 2=0.87%, 4=2.31%, 10=43.07%, 20=15.01%, 50=36.49% 00:13:00.933 lat (msec) : 100=2.22% 00:13:00.933 cpu : usr=3.08%, sys=3.48%, ctx=242, majf=0, minf=1 00:13:00.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:00.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.933 issued rwts: total=3391,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.933 job2: (groupid=0, jobs=1): err= 0: pid=883781: Tue Nov 19 10:40:39 2024 00:13:00.933 read: IOPS=9179, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1004msec) 00:13:00.933 slat (nsec): min=1017, max=10239k, avg=52062.04, stdev=384473.36 00:13:00.933 clat (usec): min=2092, max=18261, avg=7372.36, stdev=1902.79 00:13:00.933 lat (usec): min=2104, max=20935, avg=7424.42, stdev=1924.37 00:13:00.933 clat percentiles (usec): 00:13:00.933 | 1.00th=[ 3326], 5.00th=[ 5014], 10.00th=[ 5473], 20.00th=[ 5997], 00:13:00.933 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7504], 00:13:00.933 | 70.00th=[ 8029], 80.00th=[ 8586], 90.00th=[ 9372], 95.00th=[10552], 00:13:00.933 | 99.00th=[13960], 99.50th=[17171], 99.90th=[17171], 99.95th=[17171], 00:13:00.933 | 99.99th=[18220] 00:13:00.933 write: IOPS=9232, BW=36.1MiB/s (37.8MB/s)(36.2MiB/1004msec); 0 zone resets 00:13:00.933 slat (nsec): min=1651, max=10833k, avg=41366.83, stdev=358294.16 00:13:00.933 clat (usec): min=538, max=44676, avg=6311.07, stdev=3817.70 00:13:00.933 lat (usec): min=573, max=44686, avg=6352.44, stdev=3829.92 00:13:00.933 clat percentiles (usec): 00:13:00.933 | 1.00th=[ 1254], 5.00th=[ 2245], 10.00th=[ 3195], 20.00th=[ 4228], 00:13:00.933 | 30.00th=[ 4817], 40.00th=[ 5538], 50.00th=[ 6128], 60.00th=[ 6194], 00:13:00.933 | 70.00th=[ 6456], 80.00th=[ 7504], 90.00th=[ 9110], 95.00th=[10945], 00:13:00.933 | 99.00th=[21103], 99.50th=[34866], 99.90th=[42730], 99.95th=[44303], 00:13:00.933 | 99.99th=[44827] 00:13:00.933 bw ( KiB/s): min=34056, max=39672, per=42.09%, avg=36864.00, stdev=3971.11, samples=2 00:13:00.933 iops : min= 8514, max= 9918, avg=9216.00, stdev=992.78, samples=2 00:13:00.933 lat (usec) : 750=0.08%, 1000=0.10% 00:13:00.933 lat (msec) : 2=1.83%, 4=7.05%, 10=84.10%, 20=6.33%, 50=0.51% 00:13:00.933 cpu : usr=7.88%, sys=11.96%, ctx=477, majf=0, minf=1 00:13:00.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:00.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:13:00.933 issued rwts: total=9216,9269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.933 job3: (groupid=0, jobs=1): err= 0: pid=883790: Tue Nov 19 10:40:39 2024 00:13:00.933 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:13:00.933 slat (nsec): min=999, max=38281k, avg=348026.75, stdev=2246487.90 00:13:00.933 clat (usec): min=6064, max=71585, avg=43433.32, stdev=21754.03 00:13:00.933 lat (usec): min=6069, max=71591, avg=43781.34, stdev=21810.41 00:13:00.933 clat percentiles (usec): 00:13:00.933 | 1.00th=[ 6325], 5.00th=[11469], 10.00th=[14877], 20.00th=[16909], 00:13:00.933 | 30.00th=[21103], 40.00th=[38011], 50.00th=[51119], 60.00th=[56361], 00:13:00.933 | 70.00th=[63701], 80.00th=[65799], 90.00th=[67634], 95.00th=[69731], 00:13:00.933 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:13:00.933 | 99.99th=[71828] 00:13:00.933 write: IOPS=2100, BW=8402KiB/s (8603kB/s)(8452KiB/1006msec); 0 zone resets 00:13:00.933 slat (nsec): min=1680, max=16136k, avg=128707.05, stdev=796471.66 00:13:00.933 clat (usec): min=3344, max=66246, avg=18151.64, stdev=11702.69 00:13:00.933 lat (usec): min=4732, max=66253, avg=18280.35, stdev=11747.60 00:13:00.933 clat percentiles (usec): 00:13:00.933 | 1.00th=[ 4883], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[12256], 00:13:00.933 | 30.00th=[12518], 40.00th=[13173], 50.00th=[14222], 60.00th=[16057], 00:13:00.933 | 70.00th=[16712], 80.00th=[18220], 90.00th=[31327], 95.00th=[52167], 00:13:00.933 | 99.00th=[60031], 99.50th=[61080], 99.90th=[66323], 99.95th=[66323], 00:13:00.933 | 99.99th=[66323] 00:13:00.933 bw ( KiB/s): min= 4600, max=11784, per=9.35%, avg=8192.00, stdev=5079.86, samples=2 00:13:00.933 iops : min= 1150, max= 2946, avg=2048.00, stdev=1269.96, samples=2 00:13:00.933 lat (msec) : 4=0.02%, 10=6.58%, 20=48.67%, 50=15.89%, 100=28.84% 00:13:00.933 cpu : usr=1.99%, sys=2.49%, ctx=180, majf=0, minf=2 00:13:00.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:00.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.933 issued rwts: total=2048,2113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.933 00:13:00.933 Run status group 0 (all jobs): 00:13:00.933 READ: bw=82.7MiB/s (86.7MB/s), 8143KiB/s-35.9MiB/s (8339kB/s-37.6MB/s), io=83.2MiB (87.3MB), run=1004-1007msec 00:13:00.933 WRITE: bw=85.5MiB/s (89.7MB/s), 8402KiB/s-36.1MiB/s (8603kB/s-37.8MB/s), io=86.1MiB (90.3MB), run=1004-1007msec 00:13:00.933 00:13:00.933 Disk stats (read/write): 00:13:00.933 nvme0n1: ios=5227/5632, merge=0/0, ticks=40671/44792, in_queue=85463, util=84.37% 00:13:00.933 nvme0n2: ios=3125/3232, merge=0/0, ticks=20379/16979, in_queue=37358, util=87.63% 00:13:00.933 nvme0n3: ios=7227/7571, merge=0/0, ticks=51572/46148, in_queue=97720, util=95.11% 00:13:00.933 nvme0n4: ios=1760/2048, merge=0/0, ticks=20372/11913, in_queue=32285, util=97.85% 00:13:00.933 10:40:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:00.933 [global] 00:13:00.933 thread=1 00:13:00.933 invalidate=1 00:13:00.933 rw=randwrite 00:13:00.933 time_based=1 00:13:00.933 runtime=1 00:13:00.933 ioengine=libaio 00:13:00.933 direct=1 00:13:00.933 bs=4096 00:13:00.933 
iodepth=128 00:13:00.933 norandommap=0 00:13:00.933 numjobs=1 00:13:00.933 00:13:00.933 verify_dump=1 00:13:00.934 verify_backlog=512 00:13:00.934 verify_state_save=0 00:13:00.934 do_verify=1 00:13:00.934 verify=crc32c-intel 00:13:00.934 [job0] 00:13:00.934 filename=/dev/nvme0n1 00:13:00.934 [job1] 00:13:00.934 filename=/dev/nvme0n2 00:13:00.934 [job2] 00:13:00.934 filename=/dev/nvme0n3 00:13:00.934 [job3] 00:13:00.934 filename=/dev/nvme0n4 00:13:00.934 Could not set queue depth (nvme0n1) 00:13:00.934 Could not set queue depth (nvme0n2) 00:13:00.934 Could not set queue depth (nvme0n3) 00:13:00.934 Could not set queue depth (nvme0n4) 00:13:01.200 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.200 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.200 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.200 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.200 fio-3.35 00:13:01.200 Starting 4 threads 00:13:02.597 00:13:02.597 job0: (groupid=0, jobs=1): err= 0: pid=884249: Tue Nov 19 10:40:41 2024 00:13:02.597 read: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec) 00:13:02.597 slat (nsec): min=933, max=15815k, avg=69698.54, stdev=506252.92 00:13:02.597 clat (usec): min=3093, max=38739, avg=8789.87, stdev=3950.20 00:13:02.597 lat (usec): min=3103, max=38779, avg=8859.57, stdev=3992.26 00:13:02.597 clat percentiles (usec): 00:13:02.597 | 1.00th=[ 4490], 5.00th=[ 5407], 10.00th=[ 6390], 20.00th=[ 6849], 00:13:02.597 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 8094], 00:13:02.597 | 70.00th=[ 8356], 80.00th=[ 9372], 90.00th=[12649], 95.00th=[16712], 00:13:02.597 | 99.00th=[29230], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:13:02.597 | 99.99th=[38536] 00:13:02.597 write: IOPS=7258, BW=28.4MiB/s (29.7MB/s)(28.5MiB/1006msec); 0 zone resets 00:13:02.597 slat (nsec): min=1604, max=10072k, avg=62614.32, stdev=401329.92 00:13:02.597 clat (usec): min=1430, max=24796, avg=8781.33, stdev=3489.26 00:13:02.597 lat (usec): min=1457, max=24798, avg=8843.94, stdev=3509.89 00:13:02.597 clat percentiles (usec): 00:13:02.597 | 1.00th=[ 3720], 5.00th=[ 4686], 10.00th=[ 5735], 20.00th=[ 6718], 00:13:02.597 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8160], 00:13:02.597 | 70.00th=[ 8717], 80.00th=[10290], 90.00th=[13566], 95.00th=[16712], 00:13:02.597 | 99.00th=[21365], 99.50th=[22414], 99.90th=[23987], 99.95th=[24511], 00:13:02.597 | 99.99th=[24773] 00:13:02.597 bw ( KiB/s): min=24696, max=32768, per=28.34%, avg=28732.00, stdev=5707.77, samples=2 00:13:02.597 iops : min= 6174, max= 8192, avg=7183.00, stdev=1426.94, samples=2 00:13:02.597 lat (msec) : 2=0.01%, 4=1.11%, 10=79.22%, 20=17.47%, 50=2.18% 00:13:02.597 cpu : usr=4.28%, sys=6.57%, ctx=627, majf=0, minf=1 00:13:02.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:02.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.597 issued rwts: total=7168,7302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.597 job1: (groupid=0, jobs=1): err= 0: pid=884265: Tue Nov 19 10:40:41 2024 00:13:02.597 read: IOPS=6629, BW=25.9MiB/s 
(27.2MB/s)(26.0MiB/1004msec) 00:13:02.597 slat (nsec): min=929, max=10069k, avg=71324.87, stdev=490669.40 00:13:02.597 clat (usec): min=2021, max=21951, avg=9499.20, stdev=2663.80 00:13:02.597 lat (usec): min=2058, max=22048, avg=9570.53, stdev=2695.09 00:13:02.597 clat percentiles (usec): 00:13:02.597 | 1.00th=[ 3556], 5.00th=[ 5276], 10.00th=[ 6521], 20.00th=[ 7701], 00:13:02.597 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9634], 00:13:02.597 | 70.00th=[10421], 80.00th=[11600], 90.00th=[12911], 95.00th=[14353], 00:13:02.597 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:13:02.597 | 99.99th=[21890] 00:13:02.598 write: IOPS=6724, BW=26.3MiB/s (27.5MB/s)(26.4MiB/1004msec); 0 zone resets 00:13:02.598 slat (nsec): min=1596, max=10071k, avg=71222.07, stdev=472703.63 00:13:02.598 clat (usec): min=2991, max=33422, avg=9461.36, stdev=3986.75 00:13:02.598 lat (usec): min=3001, max=33424, avg=9532.59, stdev=4015.24 00:13:02.598 clat percentiles (usec): 00:13:02.598 | 1.00th=[ 4113], 5.00th=[ 4948], 10.00th=[ 5800], 20.00th=[ 6980], 00:13:02.598 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:13:02.598 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[14484], 95.00th=[15139], 00:13:02.598 | 99.00th=[31065], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424], 00:13:02.598 | 99.99th=[33424] 00:13:02.598 bw ( KiB/s): min=24704, max=28584, per=26.28%, avg=26644.00, stdev=2743.57, samples=2 00:13:02.598 iops : min= 6176, max= 7146, avg=6661.00, stdev=685.89, samples=2 00:13:02.598 lat (msec) : 4=1.09%, 10=66.17%, 20=31.74%, 50=1.00% 00:13:02.598 cpu : usr=3.89%, sys=7.38%, ctx=517, majf=0, minf=1 00:13:02.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:02.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.598 issued rwts: total=6656,6751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.598 job2: (groupid=0, jobs=1): err= 0: pid=884286: Tue Nov 19 10:40:41 2024 00:13:02.598 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:13:02.598 slat (nsec): min=919, max=7908.9k, avg=91970.06, stdev=482641.00 00:13:02.598 clat (usec): min=6062, max=37501, avg=11318.11, stdev=2931.85 00:13:02.598 lat (usec): min=6065, max=37506, avg=11410.08, stdev=2935.81 00:13:02.598 clat percentiles (usec): 00:13:02.598 | 1.00th=[ 7373], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9634], 00:13:02.598 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:13:02.598 | 70.00th=[11600], 80.00th=[12649], 90.00th=[14222], 95.00th=[15270], 00:13:02.598 | 99.00th=[23462], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:13:02.598 | 99.99th=[37487] 00:13:02.598 write: IOPS=5799, BW=22.7MiB/s (23.8MB/s)(22.7MiB/1002msec); 0 zone resets 00:13:02.598 slat (nsec): min=1535, max=9225.2k, avg=79997.54, stdev=408301.75 00:13:02.598 clat (usec): min=1182, max=41945, avg=10910.95, stdev=3851.00 00:13:02.598 lat (usec): min=1192, max=41952, avg=10990.95, stdev=3856.05 00:13:02.598 clat percentiles (usec): 00:13:02.598 | 1.00th=[ 4424], 5.00th=[ 7767], 10.00th=[ 8094], 20.00th=[ 8586], 00:13:02.598 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552], 00:13:02.598 | 70.00th=[10945], 80.00th=[11994], 90.00th=[14746], 95.00th=[15664], 00:13:02.598 | 99.00th=[33817], 99.50th=[35914], 99.90th=[41681], 99.95th=[41681], 
00:13:02.598 | 99.99th=[42206] 00:13:02.598 bw ( KiB/s): min=20896, max=24576, per=22.43%, avg=22736.00, stdev=2602.15, samples=2 00:13:02.598 iops : min= 5224, max= 6144, avg=5684.00, stdev=650.54, samples=2 00:13:02.598 lat (msec) : 2=0.25%, 10=37.28%, 20=60.60%, 50=1.86% 00:13:02.598 cpu : usr=1.70%, sys=3.90%, ctx=674, majf=0, minf=2 00:13:02.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:02.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.598 issued rwts: total=5632,5811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.598 job3: (groupid=0, jobs=1): err= 0: pid=884297: Tue Nov 19 10:40:41 2024 00:13:02.598 read: IOPS=5314, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1003msec) 00:13:02.598 slat (nsec): min=979, max=14278k, avg=93272.43, stdev=594334.52 00:13:02.598 clat (usec): min=1109, max=38353, avg=11536.61, stdev=4841.16 00:13:02.598 lat (usec): min=3906, max=38382, avg=11629.88, stdev=4889.32 00:13:02.598 clat percentiles (usec): 00:13:02.598 | 1.00th=[ 5866], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8356], 00:13:02.598 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10683], 00:13:02.598 | 70.00th=[11994], 80.00th=[13698], 90.00th=[17695], 95.00th=[21627], 00:13:02.598 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:13:02.598 | 99.99th=[38536] 00:13:02.598 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:13:02.598 slat (nsec): min=1629, max=9116.6k, avg=84321.31, stdev=441885.50 00:13:02.598 clat (usec): min=4642, max=35152, avg=11530.31, stdev=6027.18 00:13:02.598 lat (usec): min=4650, max=35158, avg=11614.63, stdev=6072.74 00:13:02.598 clat percentiles (usec): 00:13:02.598 | 1.00th=[ 5407], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8094], 00:13:02.598 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:13:02.598 | 70.00th=[10814], 80.00th=[12780], 90.00th=[21890], 95.00th=[26870], 00:13:02.598 | 99.00th=[32375], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:13:02.598 | 99.99th=[35390] 00:13:02.598 bw ( KiB/s): min=16616, max=28440, per=22.22%, avg=22528.00, stdev=8360.83, samples=2 00:13:02.598 iops : min= 4154, max= 7110, avg=5632.00, stdev=2090.21, samples=2 00:13:02.598 lat (msec) : 2=0.01%, 4=0.09%, 10=59.42%, 20=31.73%, 50=8.75% 00:13:02.598 cpu : usr=3.29%, sys=6.29%, ctx=626, majf=0, minf=1 00:13:02.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:02.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.598 issued rwts: total=5330,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.598 00:13:02.598 Run status group 0 (all jobs): 00:13:02.598 READ: bw=96.2MiB/s (101MB/s), 20.8MiB/s-27.8MiB/s (21.8MB/s-29.2MB/s), io=96.8MiB (102MB), run=1002-1006msec 00:13:02.598 WRITE: bw=99.0MiB/s (104MB/s), 21.9MiB/s-28.4MiB/s (23.0MB/s-29.7MB/s), io=99.6MiB (104MB), run=1002-1006msec 00:13:02.598 00:13:02.598 Disk stats (read/write): 00:13:02.598 nvme0n1: ios=6079/6144, merge=0/0, ticks=27033/25280, in_queue=52313, util=99.50% 00:13:02.598 nvme0n2: ios=5279/5632, merge=0/0, ticks=29463/30374, in_queue=59837, util=95.92% 00:13:02.598 nvme0n3: ios=4608/4855, 
merge=0/0, ticks=14189/13213, in_queue=27402, util=87.76% 00:13:02.598 nvme0n4: ios=4301/4608, merge=0/0, ticks=24660/25095, in_queue=49755, util=97.97% 00:13:02.598 10:40:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:02.598 10:40:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=884459 00:13:02.598 10:40:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:02.598 10:40:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:02.598 [global] 00:13:02.598 thread=1 00:13:02.598 invalidate=1 00:13:02.598 rw=read 00:13:02.598 time_based=1 00:13:02.598 runtime=10 00:13:02.598 ioengine=libaio 00:13:02.598 direct=1 00:13:02.598 bs=4096 00:13:02.598 iodepth=1 00:13:02.598 norandommap=1 00:13:02.598 numjobs=1 00:13:02.598 00:13:02.598 [job0] 00:13:02.598 filename=/dev/nvme0n1 00:13:02.598 [job1] 00:13:02.598 filename=/dev/nvme0n2 00:13:02.598 [job2] 00:13:02.598 filename=/dev/nvme0n3 00:13:02.598 [job3] 00:13:02.598 filename=/dev/nvme0n4 00:13:02.598 Could not set queue depth (nvme0n1) 00:13:02.598 Could not set queue depth (nvme0n2) 00:13:02.598 Could not set queue depth (nvme0n3) 00:13:02.598 Could not set queue depth (nvme0n4) 00:13:02.860 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.860 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.860 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.860 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.860 fio-3.35 00:13:02.860 Starting 4 threads 00:13:05.407 10:40:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:05.407 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=266240, buflen=4096 00:13:05.407 fio: pid=884814, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:05.667 10:40:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:05.667 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11272192, buflen=4096 00:13:05.667 fio: pid=884807, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:05.667 10:40:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:05.667 10:40:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:05.929 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=294912, buflen=4096 00:13:05.929 fio: pid=884773, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:05.929 10:40:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:05.929 10:40:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 
00:13:06.191 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=5181440, buflen=4096 00:13:06.191 fio: pid=884788, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:06.191 10:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.191 10:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:06.191 00:13:06.191 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=884773: Tue Nov 19 10:40:45 2024 00:13:06.191 read: IOPS=24, BW=96.8KiB/s (99.2kB/s)(288KiB/2974msec) 00:13:06.191 slat (usec): min=25, max=668, avg=38.48, stdev=79.23 00:13:06.191 clat (usec): min=997, max=42088, avg=40921.78, stdev=4793.04 00:13:06.191 lat (usec): min=1039, max=42129, avg=40960.43, stdev=4793.38 00:13:06.191 clat percentiles (usec): 00:13:06.191 | 1.00th=[ 996], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:06.191 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:13:06.191 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:13:06.191 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:06.191 | 99.99th=[42206] 00:13:06.191 bw ( KiB/s): min= 96, max= 104, per=1.85%, avg=97.60, stdev= 3.58, samples=5 00:13:06.191 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:13:06.191 lat (usec) : 1000=1.37% 00:13:06.191 lat (msec) : 50=97.26% 00:13:06.191 cpu : usr=0.03%, sys=0.03%, ctx=77, majf=0, minf=2 00:13:06.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.191 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.191 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.191 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=884788: Tue Nov 19 10:40:45 2024 00:13:06.191 read: IOPS=400, BW=1599KiB/s (1637kB/s)(5060KiB/3165msec) 00:13:06.191 slat (usec): min=7, max=13744, avg=46.56, stdev=465.00 00:13:06.191 clat (usec): min=531, max=45966, avg=2448.80, stdev=7343.94 00:13:06.191 lat (usec): min=556, max=55081, avg=2489.62, stdev=7437.48 00:13:06.191 clat percentiles (usec): 00:13:06.191 | 1.00th=[ 750], 5.00th=[ 873], 10.00th=[ 930], 20.00th=[ 979], 00:13:06.191 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1090], 00:13:06.191 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254], 00:13:06.191 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[45876], 00:13:06.191 | 99.99th=[45876] 00:13:06.191 bw ( KiB/s): min= 96, max= 3664, per=32.02%, avg=1681.33, stdev=1764.01, samples=6 00:13:06.191 iops : min= 24, max= 916, avg=420.33, stdev=441.00, samples=6 00:13:06.191 lat (usec) : 750=1.11%, 1000=24.64% 00:13:06.191 lat (msec) : 2=70.70%, 50=3.48% 00:13:06.191 cpu : usr=0.41%, sys=1.42%, ctx=1269, majf=0, minf=2 00:13:06.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.191 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.191 issued rwts: total=1266,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:13:06.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.191 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=884807: Tue Nov 19 10:40:45 2024 00:13:06.191 read: IOPS=987, BW=3947KiB/s (4042kB/s)(10.8MiB/2789msec) 00:13:06.191 slat (usec): min=6, max=16375, avg=35.25, stdev=376.29 00:13:06.191 clat (usec): min=276, max=41890, avg=963.88, stdev=1343.95 00:13:06.191 lat (usec): min=302, max=41915, avg=999.14, stdev=1395.22 00:13:06.191 clat percentiles (usec): 00:13:06.191 | 1.00th=[ 660], 5.00th=[ 750], 10.00th=[ 799], 20.00th=[ 848], 00:13:06.191 | 30.00th=[ 889], 40.00th=[ 914], 50.00th=[ 930], 60.00th=[ 955], 00:13:06.191 | 70.00th=[ 971], 80.00th=[ 988], 90.00th=[ 1020], 95.00th=[ 1057], 00:13:06.191 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[41157], 99.95th=[41681], 00:13:06.191 | 99.99th=[41681] 00:13:06.191 bw ( KiB/s): min= 3080, max= 4360, per=76.10%, avg=3995.20, stdev=529.50, samples=5 00:13:06.191 iops : min= 770, max= 1090, avg=998.80, stdev=132.38, samples=5 00:13:06.191 lat (usec) : 500=0.15%, 750=4.87%, 1000=79.95% 00:13:06.191 lat (msec) : 2=14.89%, 50=0.11% 00:13:06.191 cpu : usr=0.82%, sys=3.19%, ctx=2755, majf=0, minf=1 00:13:06.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.191 issued rwts: total=2753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.191 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=884814: Tue Nov 19 10:40:45 2024 00:13:06.191 read: IOPS=25, BW=99.7KiB/s (102kB/s)(260KiB/2607msec) 00:13:06.191 slat (nsec): min=25400, max=73083, avg=26484.56, stdev=5839.88 00:13:06.191 clat (usec): min=582, max=41646, avg=39743.53, stdev=6987.10 00:13:06.191 lat (usec): min=611, max=41671, avg=39770.02, stdev=6982.74 00:13:06.191 clat percentiles (usec): 00:13:06.191 | 1.00th=[ 586], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:06.191 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:06.191 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:06.191 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:06.191 | 99.99th=[41681] 00:13:06.191 bw ( KiB/s): min= 96, max= 112, per=1.89%, avg=99.20, stdev= 7.16, samples=5 00:13:06.191 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:13:06.191 lat (usec) : 750=1.52% 00:13:06.191 lat (msec) : 2=1.52%, 50=95.45% 00:13:06.191 cpu : usr=0.00%, sys=0.12%, ctx=66, majf=0, minf=2 00:13:06.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.191 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.191 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.191 00:13:06.191 Run status group 0 (all jobs): 00:13:06.191 READ: bw=5250KiB/s (5376kB/s), 96.8KiB/s-3947KiB/s (99.2kB/s-4042kB/s), io=16.2MiB (17.0MB), run=2607-3165msec 00:13:06.191 00:13:06.191 Disk stats (read/write): 00:13:06.191 nvme0n1: ios=97/0, merge=0/0, ticks=3541/0, in_queue=3541, util=99.80% 00:13:06.191 
nvme0n2: ios=1263/0, merge=0/0, ticks=2990/0, in_queue=2990, util=95.07% 00:13:06.191 nvme0n3: ios=2587/0, merge=0/0, ticks=2408/0, in_queue=2408, util=96.03% 00:13:06.191 nvme0n4: ios=65/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.39% 00:13:06.191 10:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.191 10:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:06.453 10:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.453 10:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:06.714 10:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.714 10:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:06.714 10:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.714 10:40:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:06.974 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:06.974 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 884459 00:13:06.974 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:06.974 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.975 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.975 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:13:06.975 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.975 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:07.235 nvmf hotplug test: fio failed as expected 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.235 10:40:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:07.235 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:07.235 rmmod nvme_tcp 00:13:07.235 rmmod nvme_fabrics 00:13:07.496 rmmod nvme_keyring 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 880944 ']' 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 880944 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 880944 ']' 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 880944 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 880944 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 880944' 00:13:07.496 killing process with pid 880944 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 880944 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 880944 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:07.496 10:40:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.496 10:40:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:10.041 00:13:10.041 real 0m29.474s 00:13:10.041 user 2m34.410s 00:13:10.041 sys 0m9.580s 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.041 ************************************ 00:13:10.041 END TEST nvmf_fio_target 00:13:10.041 ************************************ 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:10.041 ************************************ 00:13:10.041 START TEST nvmf_bdevio 00:13:10.041 ************************************ 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:10.041 * Looking for test storage... 
00:13:10.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.041 10:40:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:10.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.041 --rc genhtml_branch_coverage=1 00:13:10.041 --rc genhtml_function_coverage=1 00:13:10.041 --rc genhtml_legend=1 00:13:10.041 --rc geninfo_all_blocks=1 00:13:10.041 --rc geninfo_unexecuted_blocks=1 00:13:10.041 00:13:10.041 ' 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:10.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.041 --rc genhtml_branch_coverage=1 00:13:10.041 --rc genhtml_function_coverage=1 00:13:10.041 --rc genhtml_legend=1 00:13:10.041 --rc geninfo_all_blocks=1 00:13:10.041 --rc geninfo_unexecuted_blocks=1 00:13:10.041 00:13:10.041 ' 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:10.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.041 --rc genhtml_branch_coverage=1 00:13:10.041 --rc genhtml_function_coverage=1 00:13:10.041 --rc genhtml_legend=1 00:13:10.041 --rc geninfo_all_blocks=1 00:13:10.041 --rc geninfo_unexecuted_blocks=1 00:13:10.041 00:13:10.041 ' 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:10.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.041 --rc genhtml_branch_coverage=1 00:13:10.041 --rc genhtml_function_coverage=1 00:13:10.041 --rc genhtml_legend=1 00:13:10.041 --rc geninfo_all_blocks=1 00:13:10.041 --rc geninfo_unexecuted_blocks=1 00:13:10.041 00:13:10.041 ' 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.041 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:10.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:10.042 10:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.179 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:18.180 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:18.180 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:18.180 10:40:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:18.180 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:18.180 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.180 
10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:18.180 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:18.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:13:18.181 00:13:18.181 --- 10.0.0.2 ping statistics --- 00:13:18.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.181 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
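Sketch note: gather_supported_nvmf_pci_devs (traced above) selects NICs by vendor:device pair, with 0x8086:0x159b being the E810 dual-port found on this node, then resolves each PCI address to its kernel netdev through /sys/bus/pci/devices/$pci/net/. A minimal standalone version of that resolution, assuming lspci is available (the harness itself walks a prebuilt pci_bus_cache rather than calling lspci):

#!/usr/bin/env bash
# Map every Intel E810 port (8086:159b) to its net interface via sysfs.
while read -r addr _; do
    pci="0000:${addr}"                             # lspci prints bus:dev.fn without the domain
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue                 # skip ports with no netdev bound
        echo "Found $pci -> ${path##*/}"
    done
done < <(lspci -d 8086:159b)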
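The nvmf_tcp_init steps that follow the device scan split the two ports across a network namespace so initiator and target traffic crosses the physical link instead of loopback. A condensed sketch of the same topology, using the interface names, addresses, port, and SPDK_NVMF rule tag from this trace (the comment string is simplified relative to the full tag the harness writes):

#!/usr/bin/env bash
set -euo pipefail
TGT_IF=cvl_0_0            # port moved into the target namespace
INI_IF=cvl_0_1            # port left in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"             # target port now visible only inside $NS
ip addr add 10.0.0.1/24 dev "$INI_IF"         # initiator IP in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port; the comment tag lets teardown strip only these rules
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                            # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1        # target ns -> root ns

With cvl_0_0 reachable only inside cvl_0_0_ns_spdk, the two pings above prove the 10.0.0.1 <-> 10.0.0.2 path really goes over the wire, which is exactly what the ping statistics in this log verify.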
00:13:18.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:13:18.181 00:13:18.181 --- 10.0.0.1 ping statistics --- 00:13:18.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.181 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=889999 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 889999 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 889999 ']' 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.181 10:40:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.181 [2024-11-19 10:40:56.529571] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
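nvmfappstart then launches nvmf_tgt inside that namespace (-m 0x78 pins it to cores 3 through 6, matching the four reactors logged below) and blocks in waitforlisten until the RPC socket answers. A reduced sketch of that start-and-wait pattern; the rpc_get_methods poll is an assumption standing in for the harness's exact readiness check:

NS_EXEC=(ip netns exec cvl_0_0_ns_spdk)
"${NS_EXEC[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!

# simplified waitforlisten: poll the UNIX-domain RPC socket until it responds
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done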
00:13:18.181 [2024-11-19 10:40:56.529639] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.181 [2024-11-19 10:40:56.628023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.181 [2024-11-19 10:40:56.680504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.181 [2024-11-19 10:40:56.680556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.181 [2024-11-19 10:40:56.680564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.181 [2024-11-19 10:40:56.680572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.181 [2024-11-19 10:40:56.680578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.181 [2024-11-19 10:40:56.682608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:18.181 [2024-11-19 10:40:56.682767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:18.181 [2024-11-19 10:40:56.682930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.181 [2024-11-19 10:40:56.682930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:18.181 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.181 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:13:18.181 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.181 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.181 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.443 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.443 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.443 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.443 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.443 [2024-11-19 10:40:57.406020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.443 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.443 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:18.443 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.443 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.443 Malloc0 00:13:18.443 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.443 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.444 10:40:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:18.444 [2024-11-19 10:40:57.484075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:18.444 { 00:13:18.444 "params": { 00:13:18.444 "name": "Nvme$subsystem", 00:13:18.444 "trtype": "$TEST_TRANSPORT", 00:13:18.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.444 "adrfam": "ipv4", 00:13:18.444 "trsvcid": "$NVMF_PORT", 00:13:18.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.444 "hdgst": ${hdgst:-false}, 00:13:18.444 "ddgst": ${ddgst:-false} 00:13:18.444 }, 00:13:18.444 "method": "bdev_nvme_attach_controller" 00:13:18.444 } 00:13:18.444 EOF 00:13:18.444 )") 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:18.444 10:40:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:18.444 "params": { 00:13:18.444 "name": "Nvme1", 00:13:18.444 "trtype": "tcp", 00:13:18.444 "traddr": "10.0.0.2", 00:13:18.444 "adrfam": "ipv4", 00:13:18.444 "trsvcid": "4420", 00:13:18.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.444 "hdgst": false, 00:13:18.444 "ddgst": false 00:13:18.444 }, 00:13:18.444 "method": "bdev_nvme_attach_controller" 00:13:18.444 }' 00:13:18.444 [2024-11-19 10:40:57.543179] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
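bdevio needs one namespace behind a TCP listener, and the rpc_cmd calls above build exactly that. The same five steps as plain rpc.py invocations against the default /var/tmp/spdk.sock, with flags copied verbatim from the trace:

RPC="./scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192       # transport options as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420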
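gen_nvmf_target_json (printed above) emits one bdev_nvme_attach_controller entry, which the harness streams to bdevio over /dev/fd/62. A self-contained equivalent wraps that entry in SPDK's JSON-config envelope; the outer subsystems/config structure is assumed from SPDK's config format, since the trace only shows the inner object:

./test/bdev/bdevio/bdevio --json <(cat << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)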
00:13:18.444 [2024-11-19 10:40:57.543246] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890090 ] 00:13:18.444 [2024-11-19 10:40:57.636199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:18.704 [2024-11-19 10:40:57.692839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.704 [2024-11-19 10:40:57.693005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.704 [2024-11-19 10:40:57.693005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.963 I/O targets: 00:13:18.963 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:18.963 00:13:18.963 00:13:18.963 CUnit - A unit testing framework for C - Version 2.1-3 00:13:18.963 http://cunit.sourceforge.net/ 00:13:18.963 00:13:18.963 00:13:18.963 Suite: bdevio tests on: Nvme1n1 00:13:18.963 Test: blockdev write read block ...passed 00:13:18.963 Test: blockdev write zeroes read block ...passed 00:13:18.963 Test: blockdev write zeroes read no split ...passed 00:13:19.223 Test: blockdev write zeroes read split ...passed 00:13:19.223 Test: blockdev write zeroes read split partial ...passed 00:13:19.223 Test: blockdev reset ...[2024-11-19 10:40:58.242056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:19.223 [2024-11-19 10:40:58.242155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f8970 (9): Bad file descriptor 00:13:19.224 [2024-11-19 10:40:58.254904] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:19.224 passed 00:13:19.224 Test: blockdev write read 8 blocks ...passed 00:13:19.224 Test: blockdev write read size > 128k ...passed 00:13:19.224 Test: blockdev write read invalid size ...passed 00:13:19.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:19.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:19.224 Test: blockdev write read max offset ...passed 00:13:19.483 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:19.483 Test: blockdev writev readv 8 blocks ...passed 00:13:19.483 Test: blockdev writev readv 30 x 1block ...passed 00:13:19.483 Test: blockdev writev readv block ...passed 00:13:19.483 Test: blockdev writev readv size > 128k ...passed 00:13:19.483 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:19.483 Test: blockdev comparev and writev ...[2024-11-19 10:40:58.561141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.483 [2024-11-19 10:40:58.561177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:19.483 [2024-11-19 10:40:58.561193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.483 [2024-11-19 10:40:58.561202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:19.483 [2024-11-19 10:40:58.561679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.484 [2024-11-19 10:40:58.561691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:19.484 [2024-11-19 10:40:58.561705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.484 [2024-11-19 10:40:58.561713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:19.484 [2024-11-19 10:40:58.562173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.484 [2024-11-19 10:40:58.562185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:19.484 [2024-11-19 10:40:58.562199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.484 [2024-11-19 10:40:58.562207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:19.484 [2024-11-19 10:40:58.562690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.484 [2024-11-19 10:40:58.562703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:19.484 [2024-11-19 10:40:58.562717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.484 [2024-11-19 10:40:58.562726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:19.484 passed 00:13:19.484 Test: blockdev nvme passthru rw ...passed 00:13:19.484 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:40:58.646988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.484 [2024-11-19 10:40:58.647007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:19.484 [2024-11-19 10:40:58.647348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.484 [2024-11-19 10:40:58.647360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:19.484 [2024-11-19 10:40:58.647703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.484 [2024-11-19 10:40:58.647714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:19.484 [2024-11-19 10:40:58.648057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.484 [2024-11-19 10:40:58.648069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:19.484 passed 00:13:19.484 Test: blockdev nvme admin passthru ...passed 00:13:19.742 Test: blockdev copy ...passed 00:13:19.742 00:13:19.742 Run Summary: Type Total Ran Passed Failed Inactive 00:13:19.742 suites 1 1 n/a 0 0 00:13:19.742 tests 23 23 23 0 0 00:13:19.742 asserts 152 152 152 0 n/a 00:13:19.742 00:13:19.742 Elapsed time = 1.367 seconds 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.743 rmmod nvme_tcp 00:13:19.743 rmmod nvme_fabrics 00:13:19.743 rmmod nvme_keyring 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
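nvmftestfini (here and in the killprocess/iptr lines that follow below) unwinds the setup in reverse: retry the module unloads that can transiently fail while connections drain, kill the target only after confirming the pid still names an SPDK reactor, strip just the SPDK_NVMF-tagged firewall rules, then drop the namespace. A condensed sketch of that teardown order, assuming the names introduced in the earlier sketches:

set +e                                   # module removal may race with disconnects
for _ in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 0.5
done
modprobe -v -r nvme-fabrics
set -e

# kill only if $nvmfpid is still our target process, never a stray sudo
if [[ $(ps --no-headers -o comm= "$nvmfpid") != sudo ]]; then
    kill "$nvmfpid" && wait "$nvmfpid"
fi

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only our tagged rules
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1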
00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 889999 ']' 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 889999 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 889999 ']' 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 889999 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.743 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 889999 00:13:20.002 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:20.002 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:20.002 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 889999' 00:13:20.002 killing process with pid 889999 00:13:20.002 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 889999 00:13:20.002 10:40:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 889999 00:13:20.002 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.002 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.002 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.002 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:20.002 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.003 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:20.003 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:20.003 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.003 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.003 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.003 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.003 10:40:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.544 10:41:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:22.544 00:13:22.544 real 0m12.338s 00:13:22.544 user 0m14.445s 00:13:22.544 sys 0m6.184s 00:13:22.544 10:41:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.544 10:41:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:22.544 ************************************ 00:13:22.544 END TEST nvmf_bdevio 00:13:22.544 ************************************ 00:13:22.544 10:41:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:22.544 00:13:22.544 real 5m4.679s 00:13:22.544 user 11m50.188s 00:13:22.544 sys 1m52.113s 00:13:22.544 
10:41:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.544 10:41:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:22.544 ************************************ 00:13:22.544 END TEST nvmf_target_core 00:13:22.544 ************************************ 00:13:22.544 10:41:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:22.544 10:41:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.544 10:41:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.544 10:41:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:22.544 ************************************ 00:13:22.544 START TEST nvmf_target_extra 00:13:22.545 ************************************ 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:22.545 * Looking for test storage... 00:13:22.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:22.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.545 --rc genhtml_branch_coverage=1 00:13:22.545 --rc genhtml_function_coverage=1 00:13:22.545 --rc genhtml_legend=1 00:13:22.545 --rc geninfo_all_blocks=1 00:13:22.545 --rc geninfo_unexecuted_blocks=1 00:13:22.545 00:13:22.545 ' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:22.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.545 --rc genhtml_branch_coverage=1 00:13:22.545 --rc genhtml_function_coverage=1 00:13:22.545 --rc genhtml_legend=1 00:13:22.545 --rc geninfo_all_blocks=1 00:13:22.545 --rc geninfo_unexecuted_blocks=1 00:13:22.545 00:13:22.545 ' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:22.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.545 --rc genhtml_branch_coverage=1 00:13:22.545 --rc genhtml_function_coverage=1 00:13:22.545 --rc genhtml_legend=1 00:13:22.545 --rc geninfo_all_blocks=1 00:13:22.545 --rc geninfo_unexecuted_blocks=1 00:13:22.545 00:13:22.545 ' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:22.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.545 --rc genhtml_branch_coverage=1 00:13:22.545 --rc genhtml_function_coverage=1 00:13:22.545 --rc genhtml_legend=1 00:13:22.545 --rc geninfo_all_blocks=1 00:13:22.545 --rc geninfo_unexecuted_blocks=1 00:13:22.545 00:13:22.545 ' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
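The lt 1.15 2 comparison traced above (scripts/common.sh's cmp_versions) decides whether the installed lcov predates 2.x and therefore still needs the explicit branch/function coverage flags. A standalone sketch of that dotted-version comparison; it is simplified, since the real helper also splits on '-' and ':' to handle suffixed versions:

lt() { # lt A B -> success when version A < version B
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields compare as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'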
00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.545 ************************************ 00:13:22.545 START TEST nvmf_example 00:13:22.545 ************************************ 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:22.545 * Looking for test storage... 
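Both common.sh sourcings above also mint the host identity with nvme-cli: gen-hostnqn returns a uuid-based NQN whose trailing UUID doubles as the host ID (the 00d0226a-... value in this trace). A sketch of deriving and using both on the initiator side; the nvme connect line is illustrative only, since this particular test attaches through bdevio rather than the kernel initiator:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep just the trailing UUID
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"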
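The recurring '[: : integer expression expected' noise comes from common.sh line 33, where an empty variable reaches '[ ... -eq 1 ]'; the branch still evaluates false, so the run survives, but test complains. Defaulting the value before the numeric comparison silences it. FLAG below is a stand-in name, since the trace does not show which variable arrives empty:

# '[' '' -eq 1 ']' -> "integer expression expected"; default the value first instead:
if [ "${FLAG:-0}" -eq 1 ]; then   # FLAG is a hypothetical stand-in for the empty variable
    echo "flag set"
fi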
00:13:22.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.545 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.806 --rc genhtml_branch_coverage=1 00:13:22.806 --rc genhtml_function_coverage=1 00:13:22.806 --rc genhtml_legend=1 00:13:22.806 --rc geninfo_all_blocks=1 00:13:22.806 --rc geninfo_unexecuted_blocks=1 00:13:22.806 00:13:22.806 ' 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.806 --rc genhtml_branch_coverage=1 00:13:22.806 --rc genhtml_function_coverage=1 00:13:22.806 --rc genhtml_legend=1 00:13:22.806 --rc geninfo_all_blocks=1 00:13:22.806 --rc geninfo_unexecuted_blocks=1 00:13:22.806 00:13:22.806 ' 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.806 --rc genhtml_branch_coverage=1 00:13:22.806 --rc genhtml_function_coverage=1 00:13:22.806 --rc genhtml_legend=1 00:13:22.806 --rc geninfo_all_blocks=1 00:13:22.806 --rc geninfo_unexecuted_blocks=1 00:13:22.806 00:13:22.806 ' 00:13:22.806 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.806 --rc genhtml_branch_coverage=1 00:13:22.806 --rc genhtml_function_coverage=1 00:13:22.806 --rc genhtml_legend=1 00:13:22.806 --rc geninfo_all_blocks=1 00:13:22.806 --rc geninfo_unexecuted_blocks=1 00:13:22.807 00:13:22.807 ' 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:22.807 10:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:22.807 10:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:13:22.807 10:41:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:30.948 10:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.948 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:30.949 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:30.949 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:30.949 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:30.949 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.949 10:41:08 
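The two "Found net devices under ..." lines come from the per-device loop traced above; condensed, the discovery step maps each kept PCI function to its kernel interface through sysfs:

    # Condensed from nvmf/common.sh@410-429 as traced above: for every kept
    # PCI function, list its net devices via sysfs and strip the path prefix.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # basename only
        net_devs+=("${pci_net_devs[@]}")
    done
    # Result on this host: cvl_0_0 (0000:4b:00.0) and cvl_0_1 (0000:4b:00.1)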
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.949 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:30.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:13:30.949 00:13:30.949 --- 10.0.0.2 ping statistics --- 00:13:30.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.949 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:13:30.949 00:13:30.949 --- 10.0.0.1 ping statistics --- 00:13:30.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.949 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=894762 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 894762 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 894762 ']' 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.949 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.950 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example 
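With both pings at sub-millisecond RTT the fabric is ready. The nvmf_tcp_init sequence traced above condenses to the following sketch: the target port is moved into its own network namespace so a single host can exercise a real TCP path between its two physical ports.

    # Condensed from the nvmf_tcp_init trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # 0.631 ms: forward path is up
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # 0.313 ms: reverse path is up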
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.950 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.950 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.211 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:31.212 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.212 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.212 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.212 10:41:10 
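Once the example target (pid 894762, launched inside the namespace as build/examples/nvmf -i 0 -g 10000 -m 0xF) is listening on /var/tmp/spdk.sock, the test provisions it over JSON-RPC. rpc_cmd is the test wrapper; issued directly through scripts/rpc.py, the same calls would look like this sketch:

    # Equivalents of the rpc_cmd invocations traced above (sketch only;
    # rpc.py defaults to the /var/tmp/spdk.sock the target opened).
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -u: I/O unit size
    $rpc bdev_malloc_create 64 512                      # 64 MB RAM bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
                                                        # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420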
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:31.212 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.212 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:31.212 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:13:43.448 Initializing NVMe Controllers
00:13:43.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:43.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:43.448 Initialization complete. Launching workers.
00:13:43.448 ========================================================
00:13:43.448                                                                                                Latency(us)
00:13:43.448 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:13:43.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   19191.28      74.97    3334.64     621.33   15491.64
00:13:43.448 ========================================================
00:13:43.448 Total                                                                  :   19191.28      74.97    3334.64     621.33   15491.64
00:13:43.448
00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:43.448 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 894762 ']' 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 894762 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 894762 ']' 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 894762 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 894762 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- #
process_name=nvmf 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 894762' 00:13:43.448 killing process with pid 894762 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 894762 00:13:43.448 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 894762 00:13:43.448 nvmf threads initialize successfully 00:13:43.448 bdev subsystem init successfully 00:13:43.448 created a nvmf target service 00:13:43.448 create targets's poll groups done 00:13:43.448 all subsystems of target started 00:13:43.448 nvmf target is running 00:13:43.449 all subsystems of target stopped 00:13:43.449 destroy targets's poll groups done 00:13:43.449 destroyed the nvmf target service 00:13:43.449 bdev subsystem finish successfully 00:13:43.449 nvmf threads destroy successfully 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.449 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.020 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:44.021 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:44.021 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:44.021 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:44.021 00:13:44.021 real 0m21.411s 00:13:44.021 user 0m46.774s 00:13:44.021 sys 0m6.977s 00:13:44.021 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.021 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:44.021 ************************************ 00:13:44.021 END TEST nvmf_example 00:13:44.021 ************************************ 00:13:44.021 10:41:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:44.021 10:41:23 
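Two observations on the example test that just finished. First, the perf summary is self-consistent: at queue depth 64 with a 3334.64 us average latency, Little's law predicts 64 / 0.00333464 s ≈ 19,192 IOPS, matching the reported 19191.28, and 19191.28 IOPS x 4096 B ≈ 74.97 MiB/s as shown. Second, nvmftestfini unwinds exactly what nvmftestinit built; condensed from the traces above (the namespace deletion happens inside _remove_spdk_ns, whose body is not traced here, so that step is an assumption):

    # Condensed teardown, per the nvmfcleanup/nvmf_tcp_fini traces above.
    modprobe -v -r nvme-tcp       # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    kill 894762                   # killprocess on the example target pid
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
    _remove_spdk_ns               # assumed to delete cvl_0_0_ns_spdk and its links
    ip -4 addr flush cvl_0_1      # leave the initiator port unconfigured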
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:44.021 10:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.021 10:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:44.021 ************************************ 00:13:44.021 START TEST nvmf_filesystem 00:13:44.021 ************************************ 00:13:44.021 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:44.021 * Looking for test storage... 00:13:44.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.021 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:44.021 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:13:44.021 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:44.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.294 --rc genhtml_branch_coverage=1 00:13:44.294 --rc genhtml_function_coverage=1 00:13:44.294 --rc genhtml_legend=1 00:13:44.294 --rc geninfo_all_blocks=1 00:13:44.294 --rc geninfo_unexecuted_blocks=1 00:13:44.294 00:13:44.294 ' 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:44.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.294 --rc genhtml_branch_coverage=1 00:13:44.294 --rc genhtml_function_coverage=1 00:13:44.294 --rc genhtml_legend=1 00:13:44.294 --rc geninfo_all_blocks=1 00:13:44.294 --rc geninfo_unexecuted_blocks=1 00:13:44.294 00:13:44.294 ' 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:44.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.294 --rc genhtml_branch_coverage=1 00:13:44.294 --rc genhtml_function_coverage=1 00:13:44.294 --rc genhtml_legend=1 00:13:44.294 --rc geninfo_all_blocks=1 00:13:44.294 --rc geninfo_unexecuted_blocks=1 00:13:44.294 00:13:44.294 ' 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:44.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.294 --rc genhtml_branch_coverage=1 00:13:44.294 --rc genhtml_function_coverage=1 00:13:44.294 --rc genhtml_legend=1 00:13:44.294 --rc geninfo_all_blocks=1 00:13:44.294 --rc geninfo_unexecuted_blocks=1 00:13:44.294 00:13:44.294 ' 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:44.294 10:41:23 
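The lt 1.15 2 gate above (deciding that the installed lcov predates 2.x) runs through cmp_versions; a minimal sketch of the logic visible in the scripts/common.sh trace, with missing components defaulting to 0:

    # Minimal sketch of cmp_versions as traced above: split both versions
    # on . - : and compare numerically per component.
    cmp_versions() {   # cmp_versions VER1 OP VER2
        local -a ver1 ver2; local v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
        done
        false   # equal: strict comparison fails (sketch ignores <= / >= forms)
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, so the lcov-1.x branch is taken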
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:44.294 
10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:44.294 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:44.295 #define SPDK_CONFIG_H 00:13:44.295 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:44.295 #define SPDK_CONFIG_APPS 1 00:13:44.295 #define SPDK_CONFIG_ARCH native 00:13:44.295 #undef SPDK_CONFIG_ASAN 00:13:44.295 #undef SPDK_CONFIG_AVAHI 00:13:44.295 #undef SPDK_CONFIG_CET 00:13:44.295 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:44.295 #define SPDK_CONFIG_COVERAGE 1 00:13:44.295 #define SPDK_CONFIG_CROSS_PREFIX 00:13:44.295 #undef SPDK_CONFIG_CRYPTO 00:13:44.295 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:44.295 #undef SPDK_CONFIG_CUSTOMOCF 00:13:44.295 #undef SPDK_CONFIG_DAOS 00:13:44.295 #define SPDK_CONFIG_DAOS_DIR 00:13:44.295 #define SPDK_CONFIG_DEBUG 1 00:13:44.295 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:44.295 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:44.295 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:44.295 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:44.295 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:44.295 #undef SPDK_CONFIG_DPDK_UADK 00:13:44.295 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:44.295 #define SPDK_CONFIG_EXAMPLES 1 00:13:44.295 #undef SPDK_CONFIG_FC 00:13:44.295 #define SPDK_CONFIG_FC_PATH 00:13:44.295 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:44.295 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:44.295 #define SPDK_CONFIG_FSDEV 1 00:13:44.295 #undef SPDK_CONFIG_FUSE 00:13:44.295 #undef SPDK_CONFIG_FUZZER 00:13:44.295 #define SPDK_CONFIG_FUZZER_LIB 00:13:44.295 #undef SPDK_CONFIG_GOLANG 00:13:44.295 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:44.295 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:44.295 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:44.295 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:44.295 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:44.295 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:44.295 #undef SPDK_CONFIG_HAVE_LZ4 00:13:44.295 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:44.295 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:44.295 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:44.295 #define SPDK_CONFIG_IDXD 1 00:13:44.295 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:44.295 #undef SPDK_CONFIG_IPSEC_MB 00:13:44.295 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:44.295 #define SPDK_CONFIG_ISAL 1 00:13:44.295 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:44.295 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:44.295 #define SPDK_CONFIG_LIBDIR 00:13:44.295 #undef SPDK_CONFIG_LTO 00:13:44.295 #define SPDK_CONFIG_MAX_LCORES 128 00:13:44.295 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:44.295 #define SPDK_CONFIG_NVME_CUSE 1 00:13:44.295 #undef SPDK_CONFIG_OCF 00:13:44.295 #define SPDK_CONFIG_OCF_PATH 00:13:44.295 #define SPDK_CONFIG_OPENSSL_PATH 00:13:44.295 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:44.295 #define SPDK_CONFIG_PGO_DIR 00:13:44.295 #undef SPDK_CONFIG_PGO_USE 00:13:44.295 #define SPDK_CONFIG_PREFIX /usr/local 00:13:44.295 #undef SPDK_CONFIG_RAID5F 00:13:44.295 #undef SPDK_CONFIG_RBD 00:13:44.295 #define SPDK_CONFIG_RDMA 1 00:13:44.295 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:44.295 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:44.295 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:44.295 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:44.295 #define SPDK_CONFIG_SHARED 1 00:13:44.295 #undef SPDK_CONFIG_SMA 00:13:44.295 #define SPDK_CONFIG_TESTS 1 00:13:44.295 #undef SPDK_CONFIG_TSAN 
00:13:44.295 #define SPDK_CONFIG_UBLK 1 00:13:44.295 #define SPDK_CONFIG_UBSAN 1 00:13:44.295 #undef SPDK_CONFIG_UNIT_TESTS 00:13:44.295 #undef SPDK_CONFIG_URING 00:13:44.295 #define SPDK_CONFIG_URING_PATH 00:13:44.295 #undef SPDK_CONFIG_URING_ZNS 00:13:44.295 #undef SPDK_CONFIG_USDT 00:13:44.295 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:44.295 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:44.295 #define SPDK_CONFIG_VFIO_USER 1 00:13:44.295 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:44.295 #define SPDK_CONFIG_VHOST 1 00:13:44.295 #define SPDK_CONFIG_VIRTIO 1 00:13:44.295 #undef SPDK_CONFIG_VTUNE 00:13:44.295 #define SPDK_CONFIG_VTUNE_DIR 00:13:44.295 #define SPDK_CONFIG_WERROR 1 00:13:44.295 #define SPDK_CONFIG_WPDK_DIR 00:13:44.295 #undef SPDK_CONFIG_XNVME 00:13:44.295 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
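A note on the applications.sh@23 check a few entries back: stripped of xtrace escaping, the long \#\d\e\f\i\n\e... pattern is a plain glob match against the config.h dump, roughly the sketch below. The consequence branch is an assumption (only the test itself is traced, and SPDK_AUTOTEST_DEBUG_APPS is unset in this run).

    # Sketch of the traced config sanity check: debug-only app options are
    # honored only when the build really defines SPDK_CONFIG_DEBUG.
    if [[ $(< "$_root/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]] \
        && ((SPDK_AUTOTEST_DEBUG_APPS)); then
        : # debug app flags would be appended here (branch not taken in this run)
    fi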
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:44.295 10:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:44.295 10:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:44.295 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
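The LD_LIBRARY_PATH and PYTHONPATH values exported above carry the same spdk/build/lib, dpdk/build/lib and libvfio-user directories several times over, apparently because each nested source of the common test script prepends them again without checking for duplicates. The effect is harmless here but easy to avoid; a small guarded-prepend sketch (path_prepend is an illustrative helper, not part of the SPDK scripts):

  # Unconditional prepending, as traced above, grows the variable on every
  # re-source:  export LD_LIBRARY_PATH=$SPDK_LIB_DIR:$DPDK_LIB_DIR:$LD_LIBRARY_PATH
  # A guarded prepend keeps it stable no matter how often it runs:
  path_prepend() {
    local var=$1 dir=$2
    case ":${!var}:" in
      *":$dir:"*) ;;  # already present, nothing to do
      *) printf -v "$var" '%s' "$dir${!var:+:${!var}}" ;;
    esac
  }
  path_prepend LD_LIBRARY_PATH "$SPDK_LIB_DIR"
  path_prepend LD_LIBRARY_PATH "$DPDK_LIB_DIR"
  export LD_LIBRARY_PATH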
00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
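All of the ': 0', ': 1' and ': tcp' lines paired with an export in the trace above are one idiom from autotest_common.sh: ':' is a no-op command whose argument expansion assigns a default only when the variable is still unset, so values injected by the CI job survive. A minimal sketch using two of the flags visible in this run (the real file defines many more):

  # "${VAR:=default}" assigns default only if VAR is unset or empty; under
  # "set -x" the no-op traces as ": 0", ": 1" or ": tcp", as seen above.
  : "${SPDK_TEST_NVMF:=0}"
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
  export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT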
00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:13:44.296 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 897550 ]] 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 897550 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:13:44.297 
10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.m3HDAz 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.m3HDAz/tests/target /tmp/spdk.m3HDAz 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:13:44.297 10:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=123540742144 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5815767040 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668221440 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847947264 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23355392 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:44.297 10:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64678055936 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=200704 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:44.297 * Looking for test storage... 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=123540742144 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8030359552 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:44.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0
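The '* Looking for test storage...' walk above is set_test_storage choosing a scratch directory: it indexes 'df -T' output by mount point, then takes the first candidate (the test directory itself, a tests/ subdirectory of the mktemp fallback /tmp/spdk.m3HDAz, or the fallback root) whose filesystem still has room for the requested size. A condensed sketch of that selection using the variable names seen in the trace; it assumes testdir and storage_fallback are already set by the caller, and it omits the new_size/95% check the real helper also performs:

  set_test_storage() {
    local requested_size=$1 target_space mount target_dir
    local source fs size use avail _
    local -A avails
    # Index "df -T" output: available bytes per mount point (df reports 1K blocks).
    while read -r source fs size use avail _ mount; do
      avails[$mount]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    # First candidate directory with enough room wins.
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      mkdir -p "$target_dir"
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      target_space=${avails[$mount]}
      if ((target_space >= requested_size)); then
        export SPDK_TEST_STORAGE=$target_dir
        return 0
      fi
    done
    return 1
  }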
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]]
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version
00:13:44.297 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
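The lt/cmp_versions walk above splits '1.15' and '2' on the characters '.', '-' and ':' and compares the fields numerically; 'lt 1.15 2' succeeds because 1 < 2 already decides it in the first field. A compact re-implementation of that comparison (a sketch, not the verbatim scripts/common.sh source):

  cmp_versions() {  # usage: cmp_versions 1.15 '<' 2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
    for ((v = 0; v < max; v++)); do
      ((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $2 == '>' ]] && return 0
      ((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $2 == '<' ]] && return 0
      ((${ver1[v]:-0} != ${ver2[v]:-0})) && return 1   # decided, wrong direction
    done
    [[ $2 == '==' || $2 == '>=' || $2 == '<=' ]]       # all fields equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo 'installed lcov is older than 2'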
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:44.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.563 --rc genhtml_branch_coverage=1
00:13:44.563 --rc genhtml_function_coverage=1
00:13:44.563 --rc genhtml_legend=1
00:13:44.563 --rc geninfo_all_blocks=1
00:13:44.563 --rc geninfo_unexecuted_blocks=1
00:13:44.563
00:13:44.563 '
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:44.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.563 --rc genhtml_branch_coverage=1
00:13:44.563 --rc genhtml_function_coverage=1
00:13:44.563 --rc genhtml_legend=1
00:13:44.563 --rc geninfo_all_blocks=1
00:13:44.563 --rc geninfo_unexecuted_blocks=1
00:13:44.563
00:13:44.563 '
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:13:44.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.563 --rc genhtml_branch_coverage=1
00:13:44.563 --rc genhtml_function_coverage=1
00:13:44.563 --rc genhtml_legend=1
00:13:44.563 --rc geninfo_all_blocks=1
00:13:44.563 --rc geninfo_unexecuted_blocks=1
00:13:44.563
00:13:44.563 '
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:13:44.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.563 --rc genhtml_branch_coverage=1
00:13:44.563 --rc genhtml_function_coverage=1
00:13:44.563 --rc genhtml_legend=1
00:13:44.563 --rc geninfo_all_blocks=1
00:13:44.563 --rc geninfo_unexecuted_blocks=1
00:13:44.563
00:13:44.563 '
00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem
-- nvmf/common.sh@7 -- # uname -s 00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.563 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:44.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:44.564 10:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:44.564 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:52.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:52.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.710 10:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:13:52.710 Found net devices under 0000:4b:00.0: cvl_0_0
10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:52.710 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:13:52.711 Found net devices under 0000:4b:00.1: cvl_0_1
10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes
00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
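The device discovery just traced boils down to two steps: collect the PCI addresses whose vendor:device IDs are on the supported list (two Intel E810 0x159b functions here), then resolve each address to its kernel network interface by globbing sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 map to cvl_0_0 and cvl_0_1. A standalone sketch of that resolution step (the 8086:159b filter is hard-coded for illustration; the trace derives it from pci_bus_cache):

  # Map each E810 PCI function to the net interface the kernel created for it.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $netdir ]] || continue   # glob left unexpanded -> no netdev bound
      echo "Found net devices under $pci: ${netdir##*/}"
    done
  done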
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:52.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:13:52.711 00:13:52.711 --- 10.0.0.2 ping statistics --- 00:13:52.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.711 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:13:52.711 00:13:52.711 --- 10.0.0.1 ping statistics --- 00:13:52.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.711 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:52.711 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 ************************************ 00:13:52.711 START TEST nvmf_filesystem_no_in_capsule 00:13:52.711 ************************************ 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=901373 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 901373 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 901373 ']' 00:13:52.711 10:41:31 
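[annotation] nvmf_tcp_init, traced above, splits the two ports across network namespaces so initiator and target traffic actually crosses the physical link: cvl_0_0 becomes the target interface at 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in the firewall, and one ping in each direction proves reachability before any NVMe traffic flows. Condensed from the trace (requires root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns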
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.711 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 [2024-11-19 10:41:31.156358] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:13:52.711 [2024-11-19 10:41:31.156420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.711 [2024-11-19 10:41:31.254809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.711 [2024-11-19 10:41:31.309139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.711 [2024-11-19 10:41:31.309201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.711 [2024-11-19 10:41:31.309210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.711 [2024-11-19 10:41:31.309218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.712 [2024-11-19 10:41:31.309224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
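[annotation] nvmfappstart launches nvmf_tgt inside the target namespace (-i 0 selects shared-memory instance 0, -e 0xFFFF enables all tracepoint groups, -m 0xF pins four cores), and waitforlisten polls the RPC socket rather than sleeping a fixed time. A sketch of the same launch, assuming the working directory is an SPDK checkout:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app answers (what waitforlisten does).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done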
00:13:52.712 [2024-11-19 10:41:31.311221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.712 [2024-11-19 10:41:31.311340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.712 [2024-11-19 10:41:31.311490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.712 [2024-11-19 10:41:31.311491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.973 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.973 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:52.973 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:52.973 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:52.973 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.973 [2024-11-19 10:41:32.027396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.973 Malloc1 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.973 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.974 10:41:32 
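[annotation] Once the reactors are up, the test provisions the target over RPC: a TCP transport with in-capsule data disabled (-c 0 is what makes this the no_in_capsule variant), a 512 MB malloc bdev with 512-byte blocks, and a subsystem; the entries that follow attach the namespace and the 10.0.0.2:4420 listener. The whole sequence reduces to these rpc.py calls (assuming rpc.py on PATH and the default /var/tmp/spdk.sock socket):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MB total, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420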
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:52.974 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.974 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:53.234 [2024-11-19 10:41:32.185106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:53.234 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:53.235 { 00:13:53.235 "name": "Malloc1", 00:13:53.235 "aliases": [ 00:13:53.235 "a851f788-3246-4468-93f4-74a2ae1a40c7" 00:13:53.235 ], 00:13:53.235 "product_name": "Malloc disk", 00:13:53.235 "block_size": 512, 00:13:53.235 "num_blocks": 1048576, 00:13:53.235 "uuid": "a851f788-3246-4468-93f4-74a2ae1a40c7", 00:13:53.235 "assigned_rate_limits": { 00:13:53.235 "rw_ios_per_sec": 0, 00:13:53.235 "rw_mbytes_per_sec": 0, 00:13:53.235 "r_mbytes_per_sec": 0, 00:13:53.235 "w_mbytes_per_sec": 0 00:13:53.235 }, 00:13:53.235 "claimed": true, 00:13:53.235 "claim_type": "exclusive_write", 00:13:53.235 "zoned": false, 00:13:53.235 "supported_io_types": { 00:13:53.235 "read": 
true, 00:13:53.235 "write": true, 00:13:53.235 "unmap": true, 00:13:53.235 "flush": true, 00:13:53.235 "reset": true, 00:13:53.235 "nvme_admin": false, 00:13:53.235 "nvme_io": false, 00:13:53.235 "nvme_io_md": false, 00:13:53.235 "write_zeroes": true, 00:13:53.235 "zcopy": true, 00:13:53.235 "get_zone_info": false, 00:13:53.235 "zone_management": false, 00:13:53.235 "zone_append": false, 00:13:53.235 "compare": false, 00:13:53.235 "compare_and_write": false, 00:13:53.235 "abort": true, 00:13:53.235 "seek_hole": false, 00:13:53.235 "seek_data": false, 00:13:53.235 "copy": true, 00:13:53.235 "nvme_iov_md": false 00:13:53.235 }, 00:13:53.235 "memory_domains": [ 00:13:53.235 { 00:13:53.235 "dma_device_id": "system", 00:13:53.235 "dma_device_type": 1 00:13:53.235 }, 00:13:53.235 { 00:13:53.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.235 "dma_device_type": 2 00:13:53.235 } 00:13:53.235 ], 00:13:53.235 "driver_specific": {} 00:13:53.235 } 00:13:53.235 ]' 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:53.235 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:55.150 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:55.150 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:55.150 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.150 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:55.150 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:57.066 10:41:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:57.066 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:57.328 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.713 ************************************ 00:13:58.713 START TEST filesystem_ext4 00:13:58.713 ************************************ 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
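[annotation] The size check above multiplies the jq-extracted bdev geometry (block_size 512 x num_blocks 1048576 = 536870912 bytes) and compares it against the size the kernel reports for the connected namespace; a single GPT partition is then carved out for the filesystem subtests. Host-side, the connect-and-partition step condenses to the following (the hostnqn/hostid values are this node's):

nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME   # resolves to nvme0n1 in this run
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe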
00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:58.713 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:58.714 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:58.714 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:58.714 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:58.714 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:58.714 mke2fs 1.47.0 (5-Feb-2023) 00:13:58.714 Discarding device blocks: 0/522240 done 00:13:58.714 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:58.714 Filesystem UUID: 2e7ad89a-43b9-4c3f-9168-4101ef40e7b6 00:13:58.714 Superblock backups stored on blocks: 00:13:58.714 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:58.714 00:13:58.714 Allocating group tables: 0/64 done 00:13:58.714 Writing inode tables: 0/64 done 00:14:01.256 Creating journal (8192 blocks): done 00:14:03.735 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:14:03.735 00:14:03.735 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:14:03.735 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:10.321 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:10.321 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:10.321 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:10.321 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:10.321 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:10.321 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:10.321 
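[annotation] Every per-filesystem subtest is the same smoke cycle: format, mount, create a file, sync, delete it, sync, unmount, then verify (below) that the target process is still alive and both block devices are still visible in lsblk. The ext4 pass just traced, written out as the commands it ran:

mkfs.ext4 -F /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 901373        # asserts nvmf_tgt survived the I/O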
10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 901373 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:10.322 00:14:10.322 real 0m11.361s 00:14:10.322 user 0m0.021s 00:14:10.322 sys 0m0.089s 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:10.322 ************************************ 00:14:10.322 END TEST filesystem_ext4 00:14:10.322 ************************************ 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:10.322 ************************************ 00:14:10.322 START TEST filesystem_btrfs 00:14:10.322 ************************************ 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:14:10.322 10:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:14:10.322 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:10.322 btrfs-progs v6.8.1 00:14:10.322 See https://btrfs.readthedocs.io for more information. 00:14:10.322 00:14:10.322 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:10.322 NOTE: several default settings have changed in version 5.15, please make sure 00:14:10.322 this does not affect your deployments: 00:14:10.322 - DUP for metadata (-m dup) 00:14:10.322 - enabled no-holes (-O no-holes) 00:14:10.322 - enabled free-space-tree (-R free-space-tree) 00:14:10.322 00:14:10.322 Label: (null) 00:14:10.322 UUID: 32160176-f33c-4dce-968d-77adde9df223 00:14:10.322 Node size: 16384 00:14:10.322 Sector size: 4096 (CPU page size: 4096) 00:14:10.322 Filesystem size: 510.00MiB 00:14:10.322 Block group profiles: 00:14:10.322 Data: single 8.00MiB 00:14:10.322 Metadata: DUP 32.00MiB 00:14:10.322 System: DUP 8.00MiB 00:14:10.322 SSD detected: yes 00:14:10.322 Zoned device: no 00:14:10.322 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:10.322 Checksum: crc32c 00:14:10.322 Number of devices: 1 00:14:10.322 Devices: 00:14:10.322 ID SIZE PATH 00:14:10.322 1 510.00MiB /dev/nvme0n1p1 00:14:10.322 00:14:10.322 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:14:10.322 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 901373 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:10.583 
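[annotation] make_filesystem, whose trace repeats for btrfs here and for xfs next, mostly just picks the right force flag, since mkfs.ext4 spells it -F while mkfs.btrfs and mkfs.xfs use -f. A hypothetical condensed rendering of the flag selection visible in the trace; the real helper in common/autotest_common.sh also keeps the retry counter $i that appears above:

make_filesystem() {
  local fstype=$1 dev_name=$2 force
  if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi
  "mkfs.$fstype" "$force" "$dev_name"
}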
10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:10.583 00:14:10.583 real 0m0.735s 00:14:10.583 user 0m0.029s 00:14:10.583 sys 0m0.123s 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:10.583 ************************************ 00:14:10.583 END TEST filesystem_btrfs 00:14:10.583 ************************************ 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.583 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:10.844 ************************************ 00:14:10.844 START TEST filesystem_xfs 00:14:10.844 ************************************ 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:14:10.844 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:10.844 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:10.844 = sectsz=512 attr=2, projid32bit=1 00:14:10.844 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:10.844 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:10.844 data 
= bsize=4096 blocks=130560, imaxpct=25 00:14:10.844 = sunit=0 swidth=0 blks 00:14:10.844 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:10.844 log =internal log bsize=4096 blocks=16384, version=2 00:14:10.844 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:10.844 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:11.785 Discarding blocks...Done. 00:14:11.785 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:14:11.785 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 901373 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:13.698 00:14:13.698 real 0m2.811s 00:14:13.698 user 0m0.026s 00:14:13.698 sys 0m0.078s 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:13.698 ************************************ 00:14:13.698 END TEST filesystem_xfs 00:14:13.698 ************************************ 00:14:13.698 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:13.959 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:13.959 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.959 10:41:53 
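[annotation] After the xfs pass the suite tears down in reverse order of setup: the partition is removed under flock so nothing races the device node, the host disconnects, and the entries that follow delete the subsystem over RPC and kill the target. Condensed (pid 901373 is this run's nvmf_tgt):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 901373        # killprocess: SIGTERM the target, then reap it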
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 901373 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 901373 ']' 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 901373 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 901373 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 901373' 00:14:13.960 killing process with pid 901373 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 901373 00:14:13.960 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 901373 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:14.221 00:14:14.221 real 0m22.243s 00:14:14.221 user 1m27.990s 00:14:14.221 sys 0m1.473s 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:14.221 ************************************ 00:14:14.221 END TEST nvmf_filesystem_no_in_capsule 00:14:14.221 ************************************ 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.221 ************************************ 00:14:14.221 START TEST nvmf_filesystem_in_capsule 00:14:14.221 ************************************ 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.221 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:14.482 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=905844 00:14:14.482 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 905844 00:14:14.482 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.482 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 905844 ']' 00:14:14.482 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.482 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.482 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
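[annotation] The second suite repeats the entire sequence with in_capsule=4096. The only functional difference is the transport creation a few entries below, where -c 4096 lets a host place up to 4096 bytes of write data inside the command capsule itself, instead of the target soliciting it in a separate data transfer. Side by side:

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first suite: no in-capsule data
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this suite: up to 4 KiB in-capsule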
00:14:14.482 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.482 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:14.482 [2024-11-19 10:41:53.487033] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:14:14.482 [2024-11-19 10:41:53.487114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.482 [2024-11-19 10:41:53.582052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.482 [2024-11-19 10:41:53.617669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.482 [2024-11-19 10:41:53.617700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.482 [2024-11-19 10:41:53.617706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.482 [2024-11-19 10:41:53.617711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.482 [2024-11-19 10:41:53.617715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.482 [2024-11-19 10:41:53.619041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.482 [2024-11-19 10:41:53.619206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.482 [2024-11-19 10:41:53.619282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.482 [2024-11-19 10:41:53.619283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.425 [2024-11-19 10:41:54.325997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.425 10:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.425 Malloc1 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.425 [2024-11-19 10:41:54.454075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:14:15.425 10:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:15.425 { 00:14:15.425 "name": "Malloc1", 00:14:15.425 "aliases": [ 00:14:15.425 "3e6d9a95-f3eb-4787-bc98-a048186ea559" 00:14:15.425 ], 00:14:15.425 "product_name": "Malloc disk", 00:14:15.425 "block_size": 512, 00:14:15.425 "num_blocks": 1048576, 00:14:15.425 "uuid": "3e6d9a95-f3eb-4787-bc98-a048186ea559", 00:14:15.425 "assigned_rate_limits": { 00:14:15.425 "rw_ios_per_sec": 0, 00:14:15.425 "rw_mbytes_per_sec": 0, 00:14:15.425 "r_mbytes_per_sec": 0, 00:14:15.425 "w_mbytes_per_sec": 0 00:14:15.425 }, 00:14:15.425 "claimed": true, 00:14:15.425 "claim_type": "exclusive_write", 00:14:15.425 "zoned": false, 00:14:15.425 "supported_io_types": { 00:14:15.425 "read": true, 00:14:15.425 "write": true, 00:14:15.425 "unmap": true, 00:14:15.425 "flush": true, 00:14:15.425 "reset": true, 00:14:15.425 "nvme_admin": false, 00:14:15.425 "nvme_io": false, 00:14:15.425 "nvme_io_md": false, 00:14:15.425 "write_zeroes": true, 00:14:15.425 "zcopy": true, 00:14:15.425 "get_zone_info": false, 00:14:15.425 "zone_management": false, 00:14:15.425 "zone_append": false, 00:14:15.425 "compare": false, 00:14:15.425 "compare_and_write": false, 00:14:15.425 "abort": true, 00:14:15.425 "seek_hole": false, 00:14:15.425 "seek_data": false, 00:14:15.425 "copy": true, 00:14:15.425 "nvme_iov_md": false 00:14:15.425 }, 00:14:15.425 "memory_domains": [ 00:14:15.425 { 00:14:15.425 "dma_device_id": "system", 00:14:15.425 "dma_device_type": 1 00:14:15.425 }, 00:14:15.425 { 00:14:15.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.425 "dma_device_type": 2 00:14:15.425 } 00:14:15.425 ], 00:14:15.425 "driver_specific": {} 00:14:15.425 } 00:14:15.425 ]' 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:15.425 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:17.342 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:17.342 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:14:17.342 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.342 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:17.342 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:19.258 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:19.258 10:41:58 
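On the host side, the log then connects with nvme-cli, waits for a block device carrying the subsystem's serial, and partitions it. A condensed sketch of that flow (hostnqn, hostid, and serial are taken verbatim from the trace; the device name is whatever the kernel assigns):

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
# waitforserial: poll until the namespace shows up with the expected serial.
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
# One GPT partition spanning the whole 512 MiB namespace.
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%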
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:19.830 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.772 ************************************ 00:14:20.772 START TEST filesystem_in_capsule_ext4 00:14:20.772 ************************************ 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:14:20.772 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:20.772 mke2fs 1.47.0 (5-Feb-2023) 00:14:20.772 Discarding device blocks: 0/522240 done 00:14:20.772 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:20.772 Filesystem UUID: b089e42d-c29c-4211-99fe-0bfac540b5f7 00:14:20.772 Superblock backups stored on blocks: 00:14:20.772 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:20.772 00:14:20.772 Allocating group tables: 0/64 done 00:14:20.772 Writing inode tables: 
0/64 done 00:14:21.344 Creating journal (8192 blocks): done 00:14:23.560 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:14:23.560 00:14:23.560 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:14:23.560 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:30.143 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:30.143 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:30.143 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:30.143 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:30.143 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:30.143 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:30.143 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 905844 00:14:30.143 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:30.143 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:30.143 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:30.144 00:14:30.144 real 0m8.547s 00:14:30.144 user 0m0.039s 00:14:30.144 sys 0m0.068s 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:30.144 ************************************ 00:14:30.144 END TEST filesystem_in_capsule_ext4 00:14:30.144 ************************************ 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.144 
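Each filesystem subtest then exercises the mount in the same way: mount the partition, create and delete a file with syncs in between, unmount, and confirm both the target process and the block devices survived. Condensed from the target/filesystem.sh steps traced above (905844 is this run's target PID):

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 905844                             # target must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still present
lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present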
************************************ 00:14:30.144 START TEST filesystem_in_capsule_btrfs 00:14:30.144 ************************************ 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:14:30.144 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:30.144 btrfs-progs v6.8.1 00:14:30.144 See https://btrfs.readthedocs.io for more information. 00:14:30.144 00:14:30.144 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:14:30.144 NOTE: several default settings have changed in version 5.15, please make sure 00:14:30.144 this does not affect your deployments: 00:14:30.144 - DUP for metadata (-m dup) 00:14:30.144 - enabled no-holes (-O no-holes) 00:14:30.145 - enabled free-space-tree (-R free-space-tree) 00:14:30.145 00:14:30.145 Label: (null) 00:14:30.145 UUID: da3ce257-20ac-4028-b24c-4fc2e7758dec 00:14:30.145 Node size: 16384 00:14:30.145 Sector size: 4096 (CPU page size: 4096) 00:14:30.145 Filesystem size: 510.00MiB 00:14:30.145 Block group profiles: 00:14:30.145 Data: single 8.00MiB 00:14:30.145 Metadata: DUP 32.00MiB 00:14:30.145 System: DUP 8.00MiB 00:14:30.145 SSD detected: yes 00:14:30.145 Zoned device: no 00:14:30.145 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:30.145 Checksum: crc32c 00:14:30.145 Number of devices: 1 00:14:30.145 Devices: 00:14:30.145 ID SIZE PATH 00:14:30.145 1 510.00MiB /dev/nvme0n1p1 00:14:30.145 00:14:30.145 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:14:30.145 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:30.145 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:30.145 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 905844 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:30.145 00:14:30.145 real 0m0.641s 00:14:30.145 user 0m0.031s 00:14:30.145 sys 0m0.115s 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.145 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:14:30.146 ************************************ 00:14:30.146 END TEST filesystem_in_capsule_btrfs 00:14:30.146 ************************************ 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.146 ************************************ 00:14:30.146 START TEST filesystem_in_capsule_xfs 00:14:30.146 ************************************ 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:14:30.146 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:30.146 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:30.146 = sectsz=512 attr=2, projid32bit=1 00:14:30.146 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:30.146 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:30.146 data = bsize=4096 blocks=130560, imaxpct=25 00:14:30.146 = sunit=0 swidth=0 blks 00:14:30.146 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:30.146 log =internal log bsize=4096 blocks=16384, version=2 00:14:30.146 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:30.147 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:31.089 Discarding blocks...Done. 
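All three subtests funnel through the same make_filesystem helper, whose force-flag selection is visible in the @935-@941 trace lines: ext4 gets -F, btrfs and xfs get -f. A simplified sketch (the helper's retry loop is elided here):

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F        # mkfs.ext4 spells "force" differently
    else
        force=-f        # mkfs.btrfs / mkfs.xfs
    fi
    mkfs."$fstype" "$force" "$dev_name"
}
# e.g. make_filesystem xfs /dev/nvme0n1p1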
00:14:31.089 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:14:31.089 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 905844 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:33.003 00:14:33.003 real 0m2.998s 00:14:33.003 user 0m0.023s 00:14:33.003 sys 0m0.084s 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:33.003 ************************************ 00:14:33.003 END TEST filesystem_in_capsule_xfs 00:14:33.003 ************************************ 00:14:33.003 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:33.264 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 905844 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 905844 ']' 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 905844 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 905844 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 905844' 00:14:33.525 killing process with pid 905844 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 905844 00:14:33.525 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 905844 00:14:33.787 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:33.787 00:14:33.787 real 0m19.478s 00:14:33.787 user 1m17.000s 00:14:33.787 sys 0m1.446s 00:14:33.787 10:42:12 
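killprocess is the teardown helper performing the PID hygiene seen above: verify the PID exists, refuse to kill sudo by mistake, then terminate and reap. Roughly (a sketch of the checks in the trace, not the full helper):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                         # still running?
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}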
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.787 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:33.787 ************************************ 00:14:33.787 END TEST nvmf_filesystem_in_capsule 00:14:33.787 ************************************ 00:14:33.787 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:33.787 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:33.787 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:33.787 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:33.787 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:33.787 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:33.787 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:33.787 rmmod nvme_tcp 00:14:33.787 rmmod nvme_fabrics 00:14:33.787 rmmod nvme_keyring 00:14:34.048 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.048 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:34.048 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:34.048 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:34.048 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.048 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.048 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.048 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:34.048 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:14:34.048 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.048 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.048 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.048 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:34.048 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.048 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.048 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.961 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:35.961 00:14:35.961 real 0m52.039s 00:14:35.961 user 2m47.420s 00:14:35.961 sys 0m8.784s 00:14:35.961 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.961 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:35.961 
************************************ 00:14:35.961 END TEST nvmf_filesystem 00:14:35.961 ************************************ 00:14:35.961 10:42:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:35.961 10:42:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:35.961 10:42:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.961 10:42:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:36.223 ************************************ 00:14:36.223 START TEST nvmf_target_discovery 00:14:36.223 ************************************ 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:36.223 * Looking for test storage... 00:14:36.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.223 --rc genhtml_branch_coverage=1 00:14:36.223 --rc genhtml_function_coverage=1 00:14:36.223 --rc genhtml_legend=1 00:14:36.223 --rc geninfo_all_blocks=1 00:14:36.223 --rc geninfo_unexecuted_blocks=1 00:14:36.223 00:14:36.223 ' 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.223 --rc genhtml_branch_coverage=1 00:14:36.223 --rc genhtml_function_coverage=1 00:14:36.223 --rc genhtml_legend=1 00:14:36.223 --rc geninfo_all_blocks=1 00:14:36.223 --rc geninfo_unexecuted_blocks=1 00:14:36.223 00:14:36.223 ' 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.223 --rc genhtml_branch_coverage=1 00:14:36.223 --rc genhtml_function_coverage=1 00:14:36.223 --rc genhtml_legend=1 00:14:36.223 --rc geninfo_all_blocks=1 00:14:36.223 --rc geninfo_unexecuted_blocks=1 00:14:36.223 00:14:36.223 ' 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.223 --rc genhtml_branch_coverage=1 00:14:36.223 --rc genhtml_function_coverage=1 00:14:36.223 --rc genhtml_legend=1 00:14:36.223 --rc geninfo_all_blocks=1 00:14:36.223 --rc geninfo_unexecuted_blocks=1 00:14:36.223 00:14:36.223 ' 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.223 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:36.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:14:36.224 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:14:44.573 10:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:44.573 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:44.573 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:44.573 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
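The discovery loop above maps each whitelisted PCI function to its kernel net device through sysfs rather than guessing interface names. Stripped to its core (the device-ID filtering and up/down checks are omitted; the two addresses are the E810 ports found in this run):

for pci in 0000:4b:00.0 0000:4b:00.1; do
    # The kernel exposes bound netdevs under the PCI device's sysfs node.
    for net_dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net_dev" ] && echo "Found net device under $pci: ${net_dev##*/}"
    done
done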
00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:44.573 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:44.573 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.574 10:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:44.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:14:44.574 00:14:44.574 --- 10.0.0.2 ping statistics --- 00:14:44.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.574 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:14:44.574 00:14:44.574 --- 10.0.0.1 ping statistics --- 00:14:44.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.574 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=914612 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 914612 00:14:44.574 10:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 914612 ']' 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.574 10:42:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 [2024-11-19 10:42:22.964492] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:14:44.574 [2024-11-19 10:42:22.964555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.574 [2024-11-19 10:42:23.064755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.574 [2024-11-19 10:42:23.117807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.574 [2024-11-19 10:42:23.117859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.574 [2024-11-19 10:42:23.117868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.574 [2024-11-19 10:42:23.117875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.574 [2024-11-19 10:42:23.117882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
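nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A sketch of that start-and-wait pattern, assuming SPDK's rpc.py and the default /var/tmp/spdk.sock (the polling loop and its 5 s budget are illustrative; the harness's waitforlisten performs additional checks):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for ((i = 0; i < 50; i++)); do
  # rpc.py exits non-zero until the target is up and listening on the UNIX socket
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
  sleep 0.1
done
kill -0 "$nvmfpid" || echo "nvmf_tgt died during startup" >&2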
00:14:44.574 [2024-11-19 10:42:23.119911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.574 [2024-11-19 10:42:23.120070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.574 [2024-11-19 10:42:23.120231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.574 [2024-11-19 10:42:23.120267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 [2024-11-19 10:42:23.844046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 Null1 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 10:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 [2024-11-19 10:42:23.904558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 Null2 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:14:44.836 Null3 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:44.836 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:44.836 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.836 Null4 00:14:44.836 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.836 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:44.836 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.836 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.097 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.097 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:45.359 00:14:45.359 Discovery Log Number of Records 6, Generation counter 6 00:14:45.359 =====Discovery Log Entry 0====== 00:14:45.359 trtype: tcp 00:14:45.359 adrfam: ipv4 00:14:45.359 subtype: current discovery subsystem 00:14:45.359 treq: not required 00:14:45.359 portid: 0 00:14:45.359 trsvcid: 4420 00:14:45.359 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:45.359 traddr: 10.0.0.2 00:14:45.359 eflags: explicit discovery connections, duplicate discovery information 00:14:45.359 sectype: none 00:14:45.359 =====Discovery Log Entry 1====== 00:14:45.359 trtype: tcp 00:14:45.359 adrfam: ipv4 00:14:45.359 subtype: nvme subsystem 00:14:45.359 treq: not required 00:14:45.359 portid: 0 00:14:45.359 trsvcid: 4420 00:14:45.359 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:45.359 traddr: 10.0.0.2 00:14:45.359 eflags: none 00:14:45.359 sectype: none 00:14:45.359 =====Discovery Log Entry 2====== 00:14:45.359 trtype: tcp 00:14:45.359 adrfam: ipv4 00:14:45.359 subtype: nvme subsystem 00:14:45.359 treq: not required 00:14:45.359 portid: 0 00:14:45.359 trsvcid: 4420 00:14:45.359 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:45.359 traddr: 10.0.0.2 00:14:45.359 eflags: none 00:14:45.359 sectype: none 00:14:45.359 =====Discovery Log Entry 3====== 00:14:45.359 trtype: tcp 00:14:45.359 adrfam: ipv4 00:14:45.359 subtype: nvme subsystem 00:14:45.359 treq: not required 00:14:45.359 portid: 0 00:14:45.359 trsvcid: 4420 00:14:45.359 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:45.359 traddr: 10.0.0.2 00:14:45.359 eflags: none 00:14:45.359 sectype: none 00:14:45.359 =====Discovery Log Entry 4====== 00:14:45.359 trtype: tcp 00:14:45.359 adrfam: ipv4 00:14:45.360 subtype: nvme subsystem 
00:14:45.360 treq: not required 00:14:45.360 portid: 0 00:14:45.360 trsvcid: 4420 00:14:45.360 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:45.360 traddr: 10.0.0.2 00:14:45.360 eflags: none 00:14:45.360 sectype: none 00:14:45.360 =====Discovery Log Entry 5====== 00:14:45.360 trtype: tcp 00:14:45.360 adrfam: ipv4 00:14:45.360 subtype: discovery subsystem referral 00:14:45.360 treq: not required 00:14:45.360 portid: 0 00:14:45.360 trsvcid: 4430 00:14:45.360 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:45.360 traddr: 10.0.0.2 00:14:45.360 eflags: none 00:14:45.360 sectype: none 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:45.360 Perform nvmf subsystem discovery via RPC 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.360 [ 00:14:45.360 { 00:14:45.360 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:45.360 "subtype": "Discovery", 00:14:45.360 "listen_addresses": [ 00:14:45.360 { 00:14:45.360 "trtype": "TCP", 00:14:45.360 "adrfam": "IPv4", 00:14:45.360 "traddr": "10.0.0.2", 00:14:45.360 "trsvcid": "4420" 00:14:45.360 } 00:14:45.360 ], 00:14:45.360 "allow_any_host": true, 00:14:45.360 "hosts": [] 00:14:45.360 }, 00:14:45.360 { 00:14:45.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.360 "subtype": "NVMe", 00:14:45.360 "listen_addresses": [ 00:14:45.360 { 00:14:45.360 "trtype": "TCP", 00:14:45.360 "adrfam": "IPv4", 00:14:45.360 "traddr": "10.0.0.2", 00:14:45.360 "trsvcid": "4420" 00:14:45.360 } 00:14:45.360 ], 00:14:45.360 "allow_any_host": true, 00:14:45.360 "hosts": [], 00:14:45.360 "serial_number": "SPDK00000000000001", 00:14:45.360 "model_number": "SPDK bdev Controller", 00:14:45.360 "max_namespaces": 32, 00:14:45.360 "min_cntlid": 1, 00:14:45.360 "max_cntlid": 65519, 00:14:45.360 "namespaces": [ 00:14:45.360 { 00:14:45.360 "nsid": 1, 00:14:45.360 "bdev_name": "Null1", 00:14:45.360 "name": "Null1", 00:14:45.360 "nguid": "4A25576E9D3143509CBB336B24C4E8B0", 00:14:45.360 "uuid": "4a25576e-9d31-4350-9cbb-336b24c4e8b0" 00:14:45.360 } 00:14:45.360 ] 00:14:45.360 }, 00:14:45.360 { 00:14:45.360 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:45.360 "subtype": "NVMe", 00:14:45.360 "listen_addresses": [ 00:14:45.360 { 00:14:45.360 "trtype": "TCP", 00:14:45.360 "adrfam": "IPv4", 00:14:45.360 "traddr": "10.0.0.2", 00:14:45.360 "trsvcid": "4420" 00:14:45.360 } 00:14:45.360 ], 00:14:45.360 "allow_any_host": true, 00:14:45.360 "hosts": [], 00:14:45.360 "serial_number": "SPDK00000000000002", 00:14:45.360 "model_number": "SPDK bdev Controller", 00:14:45.360 "max_namespaces": 32, 00:14:45.360 "min_cntlid": 1, 00:14:45.360 "max_cntlid": 65519, 00:14:45.360 "namespaces": [ 00:14:45.360 { 00:14:45.360 "nsid": 1, 00:14:45.360 "bdev_name": "Null2", 00:14:45.360 "name": "Null2", 00:14:45.360 "nguid": "87C0E11A659844528EA1F243A83DA828", 00:14:45.360 "uuid": "87c0e11a-6598-4452-8ea1-f243a83da828" 00:14:45.360 } 00:14:45.360 ] 00:14:45.360 }, 00:14:45.360 { 00:14:45.360 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:45.360 "subtype": "NVMe", 00:14:45.360 "listen_addresses": [ 00:14:45.360 { 00:14:45.360 "trtype": "TCP", 00:14:45.360 "adrfam": "IPv4", 00:14:45.360 "traddr": "10.0.0.2", 
00:14:45.360 "trsvcid": "4420" 00:14:45.360 } 00:14:45.360 ], 00:14:45.360 "allow_any_host": true, 00:14:45.360 "hosts": [], 00:14:45.360 "serial_number": "SPDK00000000000003", 00:14:45.360 "model_number": "SPDK bdev Controller", 00:14:45.360 "max_namespaces": 32, 00:14:45.360 "min_cntlid": 1, 00:14:45.360 "max_cntlid": 65519, 00:14:45.360 "namespaces": [ 00:14:45.360 { 00:14:45.360 "nsid": 1, 00:14:45.360 "bdev_name": "Null3", 00:14:45.360 "name": "Null3", 00:14:45.360 "nguid": "B326F4B18D944B57AE5D4D68FADDC816", 00:14:45.360 "uuid": "b326f4b1-8d94-4b57-ae5d-4d68faddc816" 00:14:45.360 } 00:14:45.360 ] 00:14:45.360 }, 00:14:45.360 { 00:14:45.360 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:45.360 "subtype": "NVMe", 00:14:45.360 "listen_addresses": [ 00:14:45.360 { 00:14:45.360 "trtype": "TCP", 00:14:45.360 "adrfam": "IPv4", 00:14:45.360 "traddr": "10.0.0.2", 00:14:45.360 "trsvcid": "4420" 00:14:45.360 } 00:14:45.360 ], 00:14:45.360 "allow_any_host": true, 00:14:45.360 "hosts": [], 00:14:45.360 "serial_number": "SPDK00000000000004", 00:14:45.360 "model_number": "SPDK bdev Controller", 00:14:45.360 "max_namespaces": 32, 00:14:45.360 "min_cntlid": 1, 00:14:45.360 "max_cntlid": 65519, 00:14:45.360 "namespaces": [ 00:14:45.360 { 00:14:45.360 "nsid": 1, 00:14:45.360 "bdev_name": "Null4", 00:14:45.360 "name": "Null4", 00:14:45.360 "nguid": "46841BA6F9CD40EBA38B2D37169CA6EB", 00:14:45.360 "uuid": "46841ba6-f9cd-40eb-a38b-2d37169ca6eb" 00:14:45.360 } 00:14:45.360 ] 00:14:45.360 } 00:14:45.360 ] 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.360 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.360 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:45.361 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:45.361 rmmod nvme_tcp 00:14:45.361 rmmod nvme_fabrics 00:14:45.361 rmmod nvme_keyring 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 914612 ']' 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 914612 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 914612 ']' 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 914612 00:14:45.361 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 914612 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 914612' 00:14:45.622 killing process with pid 914612 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 914612 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 914612 00:14:45.622 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.622 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.169 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:48.169 00:14:48.169 real 0m11.703s 00:14:48.169 user 0m8.978s 00:14:48.169 sys 0m6.156s 00:14:48.169 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.169 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.169 ************************************ 00:14:48.169 END TEST nvmf_target_discovery 00:14:48.169 ************************************ 00:14:48.169 10:42:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:48.169 10:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:48.169 10:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.169 10:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.169 ************************************ 00:14:48.169 START TEST nvmf_referrals 00:14:48.169 ************************************ 00:14:48.169 10:42:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:48.169 * Looking for test storage... 
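The discovery test that just ended (END TEST above) drives one fixed RPC sequence per subsystem; condensed here from the rpc_cmd calls in the trace into plain rpc.py invocations (host-NQN flags omitted, sizes and NQNs copied from the log):

for i in 1 2 3 4; do
  ./scripts/rpc.py bdev_null_create "Null$i" 102400 512            # size/block as traced
  ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      -a -s "SPDK0000000000000$i"                                  # -a: allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done
# With the discovery listener and the 4430 referral added, 'nvme discover'
# reports the 6 records seen above: 1 discovery + 4 subsystems + 1 referral.
nvme discover -t tcp -a 10.0.0.2 -s 4420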
00:14:48.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:48.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.169 --rc genhtml_branch_coverage=1 00:14:48.169 --rc genhtml_function_coverage=1 00:14:48.169 --rc genhtml_legend=1 00:14:48.169 --rc geninfo_all_blocks=1 00:14:48.169 --rc geninfo_unexecuted_blocks=1 00:14:48.169 00:14:48.169 ' 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:48.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.169 --rc genhtml_branch_coverage=1 00:14:48.169 --rc genhtml_function_coverage=1 00:14:48.169 --rc genhtml_legend=1 00:14:48.169 --rc geninfo_all_blocks=1 00:14:48.169 --rc geninfo_unexecuted_blocks=1 00:14:48.169 00:14:48.169 ' 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:48.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.169 --rc genhtml_branch_coverage=1 00:14:48.169 --rc genhtml_function_coverage=1 00:14:48.169 --rc genhtml_legend=1 00:14:48.169 --rc geninfo_all_blocks=1 00:14:48.169 --rc geninfo_unexecuted_blocks=1 00:14:48.169 00:14:48.169 ' 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:48.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.169 --rc genhtml_branch_coverage=1 00:14:48.169 --rc genhtml_function_coverage=1 00:14:48.169 --rc genhtml_legend=1 00:14:48.169 --rc geninfo_all_blocks=1 00:14:48.169 --rc geninfo_unexecuted_blocks=1 00:14:48.169 00:14:48.169 ' 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.169 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
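referrals.sh defines three loopback referral addresses (127.0.0.2 through 127.0.0.4) and referral port 4430 just above; the test body exercises the referral RPCs already seen in discovery.sh. A sketch of that add/list/remove cycle (rpc.py called directly for illustration; nvmf_discovery_get_referrals assumed available in this SPDK build):

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
./scripts/rpc.py nvmf_discovery_get_referrals        # JSON list of registered referrals
./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430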
00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:14:48.170 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:56.322 10:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:56.322 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:56.322 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:56.322 
10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:56.322 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:56.322 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:56.323 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:56.323 10:42:34 
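[editor's note] The discovery loop traced above resolves each whitelisted PCI function to its kernel net device by globbing sysfs, then strips the directory path to keep only the interface name. A minimal standalone sketch of the same technique (the PCI addresses are the ones from this run; the cvl_0_* names are just what udev assigned on this test bed):

    #!/usr/bin/env bash
    # Resolve NIC PCI functions to net device names, as nvmf/common.sh
    # does with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # If the glob matched nothing, the literal pattern remains; skip.
        [[ -e ${pci_net_devs[0]} ]] || continue
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done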
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:56.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:56.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:14:56.323 00:14:56.323 --- 10.0.0.2 ping statistics --- 00:14:56.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.323 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:14:56.323 00:14:56.323 --- 10.0.0.1 ping statistics --- 00:14:56.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.323 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=919318 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 919318 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 919318 ']' 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
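[editor's note] The nvmf_tcp_init sequence above builds the test topology: the first port is moved into a network namespace and becomes the target (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), a tagged iptables rule opens the NVMe/TCP port, and a ping in each direction verifies the link. Condensed from the trace (interface and namespace names are this run's):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The comment tag lets teardown strip exactly these rules later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                               # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> initiator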
00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.323 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.323 [2024-11-19 10:42:34.776122] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:14:56.323 [2024-11-19 10:42:34.776200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.323 [2024-11-19 10:42:34.877039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.323 [2024-11-19 10:42:34.930351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.323 [2024-11-19 10:42:34.930406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.323 [2024-11-19 10:42:34.930414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.323 [2024-11-19 10:42:34.930422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.323 [2024-11-19 10:42:34.930428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.323 [2024-11-19 10:42:34.932815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.323 [2024-11-19 10:42:34.932974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.323 [2024-11-19 10:42:34.933100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.323 [2024-11-19 10:42:34.933101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.585 [2024-11-19 10:42:35.653128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
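[editor's note] nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the app is reachable over its RPC socket. A simplified sketch of that handshake (the real helpers in test/common/autotest_common.sh also confirm the app answers RPCs; this version only waits for the unix socket to appear):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # Bail out if the target died instead of coming up.
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        [[ -S /var/tmp/spdk.sock ]] && break   # RPC socket is up
        sleep 0.1
    done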
00:14:56.585 [2024-11-19 10:42:35.669489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.585 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:56.586 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.847 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:56.847 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
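[editor's note] The three referrals above are registered through the target's RPC interface; rpc_cmd simply forwards its arguments to scripts/rpc.py (or an equivalent RPC channel). Issued directly, the same calls look like this:

    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 3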
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:56.847 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:56.847 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:56.847 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:56.847 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:56.847 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:56.847 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:56.847 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:56.847 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:56.847 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:56.847 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.847 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:57.107 10:42:36 
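[editor's note] The nvme-side check above reads the referrals back off the wire: 'nvme discover -o json' returns the full discovery log page, and jq drops the "current discovery subsystem" record so only referral entries remain, sorted for a stable comparison. NVME_HOSTNQN and NVME_HOSTID are the values nvmf/common.sh generated for this run:

    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
         -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
      sort
    # -> 127.0.0.2 127.0.0.3 127.0.0.4 while all three referrals are registered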
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:57.107 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:57.368 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.629 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.890 10:42:37 
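[editor's note] The @67/@68 checks above rely on the record subtype: a referral registered with a subsystem NQN (-n nqn.2016-06.io.spdk:cnode1) is reported as "nvme subsystem", while one registered with -n discovery shows up as "discovery subsystem referral". get_discovery_entries reduces to a subtype filter over the discovery log; a sketch (the in-tree helper interpolates the subtype into the jq program, the --arg form here is equivalent):

    get_discovery_entries() {
        local subtype=$1
        nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
             -t tcp -a 10.0.0.2 -s 8009 -o json |
          jq --arg st "$subtype" '.records[] | select(.subtype == $st)'
    }
    get_discovery_entries 'nvme subsystem' | jq -r .subnqn
    # -> nqn.2016-06.io.spdk:cnode1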
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:57.890 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.151 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:58.412 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:58.412 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:58.412 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:58.412 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:14:58.412 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.412 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:58.673 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:58.934 rmmod nvme_tcp 00:14:58.934 rmmod nvme_fabrics 00:14:58.934 rmmod nvme_keyring 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 919318 ']' 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 919318 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 919318 ']' 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 919318 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.934 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 919318 00:14:58.934 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.934 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.934 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 919318' 00:14:58.934 killing process with pid 919318 00:14:58.934 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 919318 00:14:58.934 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 919318 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.195 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
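[editor's note] Teardown runs in the order traced above: unload the kernel modules (the rmmod lines are modprobe -r removing nvme_tcp and its dependents), kill the target by the pid recorded at startup (919318 in this run), then restore a firewall state with the SPDK_NVMF-tagged rules filtered out. A condensed sketch, assuming $nvmfpid holds that pid:

    modprobe -v -r nvme-tcp        # pulls out nvme_tcp and its users
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # physical port returns to the root namespace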
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.108 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:01.108 00:15:01.108 real 0m13.312s 00:15:01.108 user 0m16.160s 00:15:01.108 sys 0m6.509s 00:15:01.108 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.108 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:01.108 ************************************ 00:15:01.108 END TEST nvmf_referrals 00:15:01.108 ************************************ 00:15:01.370 10:42:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:01.370 10:42:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.371 ************************************ 00:15:01.371 START TEST nvmf_connect_disconnect 00:15:01.371 ************************************ 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:01.371 * Looking for test storage... 00:15:01.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:01.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.371 --rc genhtml_branch_coverage=1 00:15:01.371 --rc genhtml_function_coverage=1 00:15:01.371 --rc genhtml_legend=1 00:15:01.371 --rc geninfo_all_blocks=1 00:15:01.371 --rc geninfo_unexecuted_blocks=1 00:15:01.371 00:15:01.371 ' 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:01.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.371 --rc genhtml_branch_coverage=1 00:15:01.371 --rc genhtml_function_coverage=1 00:15:01.371 --rc genhtml_legend=1 00:15:01.371 --rc geninfo_all_blocks=1 00:15:01.371 --rc geninfo_unexecuted_blocks=1 00:15:01.371 00:15:01.371 ' 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:01.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.371 --rc genhtml_branch_coverage=1 00:15:01.371 --rc genhtml_function_coverage=1 00:15:01.371 --rc genhtml_legend=1 00:15:01.371 --rc geninfo_all_blocks=1 00:15:01.371 --rc geninfo_unexecuted_blocks=1 00:15:01.371 00:15:01.371 ' 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:01.371 --rc 
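[editor's note] The cmp_versions trace above is how the harness decides that lcov 1.15 predates 2.x: both versions are split on dots and dashes and compared field by field, with missing fields treated as 0. The scripts/common.sh helper in miniature (a condensed sketch, not a verbatim copy):

    lt() {
        local IFS=.- v1 v2 i
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly less
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # matches the 1.15 < 2 result above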
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.371 --rc genhtml_branch_coverage=1 00:15:01.371 --rc genhtml_function_coverage=1 00:15:01.371 --rc genhtml_legend=1 00:15:01.371 --rc geninfo_all_blocks=1 00:15:01.371 --rc geninfo_unexecuted_blocks=1 00:15:01.371 00:15:01.371 ' 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.371 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.633 10:42:40 
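[editor's note] The paths/export.sh trace above prepends the same toolchain directories every time the file is sourced, which is why PATH repeats /opt/go, /opt/golangci and /opt/protoc several times over. Harmless, but worth noting; the usual guard (not what the script does, just the common idiom) only prepends when the entry is missing:

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin    # second call is a no-op
    export PATH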
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:15:01.633 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:09.774 
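[editor's note] The "line 33: [: : integer expression expected" message above is a real script complaint, not test output: '[' '' -eq 1 ']' hands test(1) an empty operand where an integer is required, because the variable checked at nvmf/common.sh line 33 is unset in this environment. The run proceeds anyway since build_nvmf_app_args treats the failed test as false. A guard of the usual shape avoids the noise (SOME_FLAG is a hypothetical name; the log does not show which variable was empty):

    # Short-circuit on emptiness before the numeric comparison,
    # or default the value so the comparison is always well formed.
    if [[ -n ${SOME_FLAG:-} && ${SOME_FLAG} -eq 1 ]]; then
        echo "flag set"
    fi
    if (( ${SOME_FLAG:-0} == 1 )); then echo "flag set"; fi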
10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:15:09.774 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:09.775 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:09.775 
10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:09.775 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:09.775 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
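The discovery pass traced here resolves each matched PCI function to its kernel interface purely through sysfs: glob the device's net/ directory, then strip the path. A minimal standalone sketch of that lookup, reusing the 0000:4b:00.0 address and cvl_0_0 name recorded in this trace; everything outside these lines (the pci_bus_cache tables, the up/down check) is omitted:

    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # globs to e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # drop the sysfs prefix, keep the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

The same pattern repeats just below for the second port, 0000:4b:00.1.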
00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:09.775 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:09.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:15:09.775 00:15:09.775 --- 10.0.0.2 ping statistics --- 00:15:09.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.775 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:15:09.775 00:15:09.775 --- 10.0.0.1 ping statistics --- 00:15:09.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.775 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:09.775 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.776 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:09.776 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:09.776 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:09.776 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:09.776 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:09.776 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=924097 00:15:09.776 10:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 924097 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 924097 ']' 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.776 [2024-11-19 10:42:48.063914] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:15:09.776 [2024-11-19 10:42:48.063980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.776 [2024-11-19 10:42:48.165560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.776 [2024-11-19 10:42:48.219724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.776 [2024-11-19 10:42:48.219777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.776 [2024-11-19 10:42:48.219786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.776 [2024-11-19 10:42:48.219793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.776 [2024-11-19 10:42:48.219800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
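nvmfappstart, traced just above, launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then parks in waitforlisten until the target's RPC socket answers. A simplified sketch of that start-and-wait shape, using the binary path, flags, rpc_addr=/var/tmp/spdk.sock, and max_retries=100 visible in this trace; the socket-existence test is an assumption for brevity, and the real waitforlisten in autotest_common.sh does more before returning:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do           # max_retries=100, as traced above
        [ -S /var/tmp/spdk.sock ] && break    # socket present: target is up and listening
        sleep 0.1
    done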
00:15:09.776 [2024-11-19 10:42:48.222210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.776 [2024-11-19 10:42:48.222323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.776 [2024-11-19 10:42:48.222488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.776 [2024-11-19 10:42:48.222490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.776 [2024-11-19 10:42:48.946519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.776 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.038 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.038 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:10.038 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:10.038 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.038 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.038 10:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.038 [2024-11-19 10:42:49.024911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:10.038 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:14.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.337 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:28.337 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:28.338 rmmod nvme_tcp 00:15:28.338 rmmod nvme_fabrics 00:15:28.338 rmmod nvme_keyring 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 924097 ']' 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 924097 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 924097 ']' 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 924097 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
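The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are the only visible output of the connect_disconnect.sh main loop: num_iterations=5 was set and xtrace was then silenced with set +x. An approximate reconstruction with stock nvme-cli, using the transport, address, port, and subsystem NQN recorded in this trace; the actual script also verifies controller counts between the steps:

    for ((i = 0; i < 5; i++)); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
    done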
00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 924097 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 924097' 00:15:28.338 killing process with pid 924097 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 924097 00:15:28.338 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 924097 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.598 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.512 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:30.512 00:15:30.512 real 0m29.314s 00:15:30.512 user 1m19.194s 00:15:30.512 sys 0m7.119s 00:15:30.512 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.512 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:30.512 ************************************ 00:15:30.512 END TEST nvmf_connect_disconnect 00:15:30.512 ************************************ 00:15:30.512 10:43:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:30.512 10:43:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:30.512 10:43:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.512 10:43:09 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:15:30.773 ************************************ 00:15:30.773 START TEST nvmf_multitarget 00:15:30.773 ************************************ 00:15:30.773 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:30.773 * Looking for test storage... 00:15:30.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.773 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:30.773 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:15:30.773 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:30.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.774 --rc genhtml_branch_coverage=1 00:15:30.774 --rc genhtml_function_coverage=1 00:15:30.774 --rc genhtml_legend=1 00:15:30.774 --rc geninfo_all_blocks=1 00:15:30.774 --rc geninfo_unexecuted_blocks=1 00:15:30.774 00:15:30.774 ' 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:30.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.774 --rc genhtml_branch_coverage=1 00:15:30.774 --rc genhtml_function_coverage=1 00:15:30.774 --rc genhtml_legend=1 00:15:30.774 --rc geninfo_all_blocks=1 00:15:30.774 --rc geninfo_unexecuted_blocks=1 00:15:30.774 00:15:30.774 ' 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:30.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.774 --rc genhtml_branch_coverage=1 00:15:30.774 --rc genhtml_function_coverage=1 00:15:30.774 --rc genhtml_legend=1 00:15:30.774 --rc geninfo_all_blocks=1 00:15:30.774 --rc geninfo_unexecuted_blocks=1 00:15:30.774 00:15:30.774 ' 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:30.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.774 --rc genhtml_branch_coverage=1 00:15:30.774 --rc genhtml_function_coverage=1 00:15:30.774 --rc genhtml_legend=1 00:15:30.774 --rc geninfo_all_blocks=1 00:15:30.774 --rc geninfo_unexecuted_blocks=1 00:15:30.774 00:15:30.774 ' 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.774 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.774 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:31.036 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:31.036 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
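The recurring "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message, seen a few trace lines above as the multitarget test sources common.sh and earlier at the top of the connect_disconnect run, is a shell bug in the test script rather than a test failure: the traced command is '[' '' -eq 1 ']', and test's -eq requires integer operands, so an empty expansion trips the error and the test merely evaluates false. A minimal reproduction and the usual guard; the variable name flag is illustrative, since the identifier actually used at common.sh line 33 is not visible in this trace:

    flag=""
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected (exit status 2)
    [ "${flag:-0}" -eq 1 ]   # default the empty value to 0: no noise, cleanly false

The vendor/device tables for the multitarget run continue immediately below.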
00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:39.182 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:39.182 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:39.182 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:39.182 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:39.182 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:39.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:15:39.183 00:15:39.183 --- 10.0.0.2 ping statistics --- 00:15:39.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.183 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:39.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:15:39.183 00:15:39.183 --- 10.0.0.1 ping statistics --- 00:15:39.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.183 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=932215 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 932215 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 932215 ']' 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.183 10:43:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:39.183 [2024-11-19 10:43:17.506209] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:15:39.183 [2024-11-19 10:43:17.506276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.183 [2024-11-19 10:43:17.605925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.183 [2024-11-19 10:43:17.659465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.183 [2024-11-19 10:43:17.659520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.183 [2024-11-19 10:43:17.659529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.183 [2024-11-19 10:43:17.659537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.183 [2024-11-19 10:43:17.659544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.183 [2024-11-19 10:43:17.661585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.183 [2024-11-19 10:43:17.661750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.183 [2024-11-19 10:43:17.661913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.183 [2024-11-19 10:43:17.661914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.183 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.183 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:15:39.183 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:39.183 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:39.183 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:39.445 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.445 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:39.445 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:39.445 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:39.445 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:39.445 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:39.445 "nvmf_tgt_1" 00:15:39.445 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:39.706 "nvmf_tgt_2" 00:15:39.706 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
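Condensed, the multitarget assertions traced around this point (the nvmf_get_targets | jq length pairs) reduce to: start with the one default target, create two more, expect three, delete both, expect one again. A sketch of that sequence using the same multitarget_rpc.py invocations and checks that appear in this trace, with the harness's trap and error plumbing omitted:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" != "1" ] && exit 1
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" != "3" ] && exit 1
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" != "1" ] && exit 1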
00:15:39.706 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:39.706 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:39.706 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:39.968 true 00:15:39.968 10:43:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:39.968 true 00:15:39.968 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:39.968 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.228 rmmod nvme_tcp 00:15:40.228 rmmod nvme_fabrics 00:15:40.228 rmmod nvme_keyring 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 932215 ']' 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 932215 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 932215 ']' 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 932215 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 932215 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.228 10:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 932215' 00:15:40.228 killing process with pid 932215 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 932215 00:15:40.228 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 932215 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.489 10:43:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.401 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:42.401 00:15:42.401 real 0m11.826s 00:15:42.401 user 0m10.297s 00:15:42.401 sys 0m6.145s 00:15:42.401 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.401 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:42.401 ************************************ 00:15:42.401 END TEST nvmf_multitarget 00:15:42.401 ************************************ 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.661 ************************************ 00:15:42.661 START TEST nvmf_rpc 00:15:42.661 ************************************ 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:42.661 * Looking for test storage... 
00:15:42.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:42.661 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:42.662 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:42.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.923 --rc genhtml_branch_coverage=1 00:15:42.923 --rc genhtml_function_coverage=1 00:15:42.923 --rc genhtml_legend=1 00:15:42.923 --rc geninfo_all_blocks=1 00:15:42.923 --rc geninfo_unexecuted_blocks=1 00:15:42.923 00:15:42.923 ' 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:42.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.923 --rc genhtml_branch_coverage=1 00:15:42.923 --rc genhtml_function_coverage=1 00:15:42.923 --rc genhtml_legend=1 00:15:42.923 --rc geninfo_all_blocks=1 00:15:42.923 --rc geninfo_unexecuted_blocks=1 00:15:42.923 00:15:42.923 ' 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:42.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.923 --rc genhtml_branch_coverage=1 00:15:42.923 --rc genhtml_function_coverage=1 00:15:42.923 --rc genhtml_legend=1 00:15:42.923 --rc geninfo_all_blocks=1 00:15:42.923 --rc geninfo_unexecuted_blocks=1 00:15:42.923 00:15:42.923 ' 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:42.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.923 --rc genhtml_branch_coverage=1 00:15:42.923 --rc genhtml_function_coverage=1 00:15:42.923 --rc genhtml_legend=1 00:15:42.923 --rc geninfo_all_blocks=1 00:15:42.923 --rc geninfo_unexecuted_blocks=1 00:15:42.923 00:15:42.923 ' 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
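The lcov gate traced just above hinges on cmp_versions from scripts/common.sh, which splits version strings on ".", "-" and ":" and compares them component by component, numerically, rather than lexically. Stripped to its core, the traced logic amounts to the following paraphrase (not the verbatim helper):

    lt() {  # succeed when $1 sorts strictly before $2, component-wise
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not "less than"
    }
    lt 1.15 2 && echo old   # true: 1 < 2 on the first component, so lcov 1.15 predates 2
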
00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:42.923 10:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.923 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.924 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:42.924 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:42.924 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:42.924 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:51.062 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:51.062 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:51.062 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:51.062 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:51.062 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:51.063 10:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:51.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:15:51.063 00:15:51.063 --- 10.0.0.2 ping statistics --- 00:15:51.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.063 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:15:51.063 00:15:51.063 --- 10.0.0.1 ping statistics --- 00:15:51.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.063 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=936912 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 936912 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 936912 ']' 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.063 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.063 [2024-11-19 10:43:29.499505] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
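For reference, the topology those two pings validated was assembled a few records earlier from plain iproute2 and iptables primitives. Condensed (address flushes omitted; cvl_0_0/cvl_0_1 are the two ice ports enumerated above), the bring-up is:

    ip netns add cvl_0_0_ns_spdk                                        # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # first port moves in with it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
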
00:15:51.063 [2024-11-19 10:43:29.499571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.063 [2024-11-19 10:43:29.598437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.063 [2024-11-19 10:43:29.651496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.063 [2024-11-19 10:43:29.651545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.063 [2024-11-19 10:43:29.651554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.063 [2024-11-19 10:43:29.651561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.063 [2024-11-19 10:43:29.651568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.063 [2024-11-19 10:43:29.653545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.063 [2024-11-19 10:43:29.653707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.063 [2024-11-19 10:43:29.653866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.063 [2024-11-19 10:43:29.653867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.324 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:51.324 "tick_rate": 2400000000, 00:15:51.324 "poll_groups": [ 00:15:51.324 { 00:15:51.324 "name": "nvmf_tgt_poll_group_000", 00:15:51.324 "admin_qpairs": 0, 00:15:51.324 "io_qpairs": 0, 00:15:51.324 "current_admin_qpairs": 0, 00:15:51.324 "current_io_qpairs": 0, 00:15:51.324 "pending_bdev_io": 0, 00:15:51.324 "completed_nvme_io": 0, 00:15:51.324 "transports": [] 00:15:51.324 }, 00:15:51.324 { 00:15:51.324 "name": "nvmf_tgt_poll_group_001", 00:15:51.324 "admin_qpairs": 0, 00:15:51.324 "io_qpairs": 0, 00:15:51.324 "current_admin_qpairs": 0, 00:15:51.324 "current_io_qpairs": 0, 00:15:51.324 "pending_bdev_io": 0, 00:15:51.324 "completed_nvme_io": 0, 00:15:51.324 "transports": [] 00:15:51.324 }, 00:15:51.324 { 00:15:51.324 "name": "nvmf_tgt_poll_group_002", 00:15:51.324 "admin_qpairs": 0, 00:15:51.324 "io_qpairs": 0, 00:15:51.324 
"current_admin_qpairs": 0, 00:15:51.324 "current_io_qpairs": 0, 00:15:51.324 "pending_bdev_io": 0, 00:15:51.324 "completed_nvme_io": 0, 00:15:51.324 "transports": [] 00:15:51.324 }, 00:15:51.324 { 00:15:51.324 "name": "nvmf_tgt_poll_group_003", 00:15:51.324 "admin_qpairs": 0, 00:15:51.325 "io_qpairs": 0, 00:15:51.325 "current_admin_qpairs": 0, 00:15:51.325 "current_io_qpairs": 0, 00:15:51.325 "pending_bdev_io": 0, 00:15:51.325 "completed_nvme_io": 0, 00:15:51.325 "transports": [] 00:15:51.325 } 00:15:51.325 ] 00:15:51.325 }' 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.325 [2024-11-19 10:43:30.498449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.325 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:51.586 "tick_rate": 2400000000, 00:15:51.586 "poll_groups": [ 00:15:51.586 { 00:15:51.586 "name": "nvmf_tgt_poll_group_000", 00:15:51.586 "admin_qpairs": 0, 00:15:51.586 "io_qpairs": 0, 00:15:51.586 "current_admin_qpairs": 0, 00:15:51.586 "current_io_qpairs": 0, 00:15:51.586 "pending_bdev_io": 0, 00:15:51.586 "completed_nvme_io": 0, 00:15:51.586 "transports": [ 00:15:51.586 { 00:15:51.586 "trtype": "TCP" 00:15:51.586 } 00:15:51.586 ] 00:15:51.586 }, 00:15:51.586 { 00:15:51.586 "name": "nvmf_tgt_poll_group_001", 00:15:51.586 "admin_qpairs": 0, 00:15:51.586 "io_qpairs": 0, 00:15:51.586 "current_admin_qpairs": 0, 00:15:51.586 "current_io_qpairs": 0, 00:15:51.586 "pending_bdev_io": 0, 00:15:51.586 "completed_nvme_io": 0, 00:15:51.586 "transports": [ 00:15:51.586 { 00:15:51.586 "trtype": "TCP" 00:15:51.586 } 00:15:51.586 ] 00:15:51.586 }, 00:15:51.586 { 00:15:51.586 "name": "nvmf_tgt_poll_group_002", 00:15:51.586 "admin_qpairs": 0, 00:15:51.586 "io_qpairs": 0, 00:15:51.586 "current_admin_qpairs": 0, 00:15:51.586 "current_io_qpairs": 0, 00:15:51.586 "pending_bdev_io": 0, 00:15:51.586 "completed_nvme_io": 0, 00:15:51.586 "transports": [ 00:15:51.586 { 00:15:51.586 "trtype": "TCP" 
00:15:51.586 } 00:15:51.586 ] 00:15:51.586 }, 00:15:51.586 { 00:15:51.586 "name": "nvmf_tgt_poll_group_003", 00:15:51.586 "admin_qpairs": 0, 00:15:51.586 "io_qpairs": 0, 00:15:51.586 "current_admin_qpairs": 0, 00:15:51.586 "current_io_qpairs": 0, 00:15:51.586 "pending_bdev_io": 0, 00:15:51.586 "completed_nvme_io": 0, 00:15:51.586 "transports": [ 00:15:51.586 { 00:15:51.586 "trtype": "TCP" 00:15:51.586 } 00:15:51.586 ] 00:15:51.586 } 00:15:51.586 ] 00:15:51.586 }' 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.586 Malloc1 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.586 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.587 [2024-11-19 10:43:30.713284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:51.587 [2024-11-19 10:43:30.750288] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:51.587 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:51.587 could not add new controller: failed to write to nvme-fabrics device 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:51.587 10:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.587 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.848 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.848 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.231 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:53.232 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:53.232 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.232 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:53.232 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:55.144 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:55.144 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:55.144 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.144 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:55.144 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.144 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:55.144 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:55.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.405 [2024-11-19 10:43:34.478149] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:55.405 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:55.405 could not add new controller: failed to write to nvme-fabrics device 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.405 
10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.405 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:57.316 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:57.316 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:57.316 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.316 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:57.316 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:59.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:59.226 
10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.226 [2024-11-19 10:43:38.206639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.226 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:00.615 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:00.615 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:00.615 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.615 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:00.615 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:02.527 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:02.527 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:02.527 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:02.788 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.789 [2024-11-19 10:43:41.926676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.789 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:04.699 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:04.699 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:04.699 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.699 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:04.699 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:06.612 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:06.612 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:06.612 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:06.612 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:06.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.613 [2024-11-19 10:43:45.674125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.613 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.526 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:08.526 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:08.526 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.526 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:08.526 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:10.439 
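[Editor's annotation] Each pass of the rpc.sh@81 loop traced here rebuilds the subsystem from scratch and drives one connect/disconnect cycle: create the subsystem with a fixed serial, add the TCP listener, attach Malloc1 as namespace 5, open it to any host, connect from the initiator, then tear everything back down. One iteration, condensed from the xtrace (rpc_cmd wrapping scripts/rpc.py and the NVME_HOST hostnqn/hostid pair are inferred from context, not shown verbatim above):

    # One iteration of the rpc.sh@81..94 loop, condensed from the xtrace.
    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
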
10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:10.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
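[Editor's annotation] waitforserial_disconnect, traced repeatedly above, is the mirror helper: it succeeds once no block device with the serial remains, probing lsblk with grep -q -w. A sketch; the retry budget is an assumption, since every traced call here returned on its first probe:

    # Sketch of waitforserial_disconnect as traced above (simplified).
    waitforserial_disconnect() {
        local i=0
        while (( i++ <= 15 )); do
            if ! lsblk -l -o NAME,SERIAL | grep -q -w "$1"; then
                return 0
            fi
            sleep 2
        done
        return 1
    }
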
00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.440 [2024-11-19 10:43:49.416910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.440 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:11.822 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:11.822 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:11.822 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.822 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:11.822 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:14.364 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:14.364 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:14.364 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.364 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:14.364 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.364 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:14.364 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.364 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.364 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:14.364 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.365 [2024-11-19 10:43:53.141391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.365 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:15.746 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:15.746 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:15.746 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.746 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:15.746 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:17.658 
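[Editor's annotation] The rpc.sh@99 loop that starts here is a pure control-plane churn test: five rounds of create subsystem, add listener, add a namespace (letting the target pick the NSID this time; note the missing -n 5), allow any host, then immediately remove namespace 1 and delete the subsystem, with no initiator connect in between. Condensed from the xtrace that follows:

    # One iteration of the rpc.sh@99..107 loop; no host connect happens,
    # so this exercises only the RPC plane.
    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # NSID auto-assigned
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
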
10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.658 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 [2024-11-19 10:43:56.864979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 [2024-11-19 10:43:56.937179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.919 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 
10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 [2024-11-19 10:43:57.005387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 [2024-11-19 10:43:57.077599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.919 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.920 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.186 [2024-11-19 10:43:57.145815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:18.186 "tick_rate": 2400000000, 00:16:18.186 "poll_groups": [ 00:16:18.186 { 00:16:18.186 "name": "nvmf_tgt_poll_group_000", 00:16:18.186 "admin_qpairs": 0, 00:16:18.186 "io_qpairs": 224, 00:16:18.186 "current_admin_qpairs": 0, 00:16:18.186 "current_io_qpairs": 0, 00:16:18.186 "pending_bdev_io": 0, 00:16:18.186 "completed_nvme_io": 266, 00:16:18.186 "transports": [ 00:16:18.186 { 00:16:18.186 "trtype": "TCP" 00:16:18.186 } 00:16:18.186 ] 00:16:18.186 }, 00:16:18.186 { 00:16:18.186 "name": "nvmf_tgt_poll_group_001", 00:16:18.186 "admin_qpairs": 1, 00:16:18.186 "io_qpairs": 223, 00:16:18.186 "current_admin_qpairs": 0, 00:16:18.186 "current_io_qpairs": 0, 00:16:18.186 "pending_bdev_io": 0, 00:16:18.186 "completed_nvme_io": 230, 00:16:18.186 "transports": [ 00:16:18.186 { 00:16:18.186 "trtype": "TCP" 00:16:18.186 } 00:16:18.186 ] 00:16:18.186 }, 00:16:18.186 { 00:16:18.186 "name": "nvmf_tgt_poll_group_002", 00:16:18.186 "admin_qpairs": 6, 00:16:18.186 "io_qpairs": 218, 00:16:18.186 "current_admin_qpairs": 0, 00:16:18.186 "current_io_qpairs": 0, 00:16:18.186 "pending_bdev_io": 0, 00:16:18.186 "completed_nvme_io": 248, 00:16:18.186 "transports": [ 00:16:18.186 { 00:16:18.186 "trtype": "TCP" 00:16:18.186 } 00:16:18.186 ] 00:16:18.186 }, 00:16:18.186 { 00:16:18.186 "name": "nvmf_tgt_poll_group_003", 00:16:18.186 "admin_qpairs": 0, 00:16:18.186 "io_qpairs": 224, 00:16:18.186 "current_admin_qpairs": 0, 00:16:18.186 "current_io_qpairs": 0, 00:16:18.186 "pending_bdev_io": 0, 00:16:18.186 "completed_nvme_io": 495, 00:16:18.186 "transports": [ 00:16:18.186 { 00:16:18.186 "trtype": "TCP" 00:16:18.186 } 00:16:18.186 ] 00:16:18.186 } 00:16:18.186 ] 00:16:18.186 }' 00:16:18.186 10:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:18.186 rmmod nvme_tcp 00:16:18.186 rmmod nvme_fabrics 00:16:18.186 rmmod nvme_keyring 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 936912 ']' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 936912 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 936912 ']' 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 936912 00:16:18.186 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 936912 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 936912' 
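[Editor's annotation] The two assertions above sum fields across all poll groups of the nvmf_get_stats JSON captured at rpc.sh@110: jsum pipes the stats through a jq filter and totals the values with awk, so (( 7 > 0 )) is the summed admin_qpairs (0 + 1 + 6 + 0) and (( 889 > 0 )) the summed io_qpairs (224 + 223 + 218 + 224). The helper as reconstructed from the target/rpc.sh@19..20 trace:

    # jsum reconstructed from the xtrace above; $stats holds the
    # nvmf_get_stats JSON shown a few lines earlier.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # -> 7   (0 + 1 + 6 + 0)
    jsum '.poll_groups[].io_qpairs'      # -> 889 (224 + 223 + 218 + 224)
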
00:16:18.448 killing process with pid 936912 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 936912 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 936912 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.448 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.991 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:20.991 00:16:20.991 real 0m37.990s 00:16:20.991 user 1m53.629s 00:16:20.991 sys 0m7.913s 00:16:20.991 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.991 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.991 ************************************ 00:16:20.991 END TEST nvmf_rpc 00:16:20.991 ************************************ 00:16:20.991 10:43:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:20.991 10:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:20.991 10:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.991 10:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.991 ************************************ 00:16:20.991 START TEST nvmf_invalid 00:16:20.991 ************************************ 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:20.992 * Looking for test storage... 
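[Editor's annotation] The nvmftestfini teardown traced above keeps host firewall state intact while dropping only the test's own rules: iptables-save dumps the ruleset, grep -v SPDK_NVMF filters out every rule the test tagged (the tagging convention is inferred from the filter), iptables-restore reloads the remainder, and the test interface address is flushed. As traced at nvmf/common.sh@791 and @303:

    # Drop only rules tagged SPDK_NVMF; keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
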
00:16:20.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:20.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.992 --rc genhtml_branch_coverage=1 00:16:20.992 --rc genhtml_function_coverage=1 00:16:20.992 --rc genhtml_legend=1 00:16:20.992 --rc geninfo_all_blocks=1 00:16:20.992 --rc geninfo_unexecuted_blocks=1 00:16:20.992 00:16:20.992 ' 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:20.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.992 --rc genhtml_branch_coverage=1 00:16:20.992 --rc genhtml_function_coverage=1 00:16:20.992 --rc genhtml_legend=1 00:16:20.992 --rc geninfo_all_blocks=1 00:16:20.992 --rc geninfo_unexecuted_blocks=1 00:16:20.992 00:16:20.992 ' 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:20.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.992 --rc genhtml_branch_coverage=1 00:16:20.992 --rc genhtml_function_coverage=1 00:16:20.992 --rc genhtml_legend=1 00:16:20.992 --rc geninfo_all_blocks=1 00:16:20.992 --rc geninfo_unexecuted_blocks=1 00:16:20.992 00:16:20.992 ' 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:20.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.992 --rc genhtml_branch_coverage=1 00:16:20.992 --rc genhtml_function_coverage=1 00:16:20.992 --rc genhtml_legend=1 00:16:20.992 --rc geninfo_all_blocks=1 00:16:20.992 --rc geninfo_unexecuted_blocks=1 00:16:20.992 00:16:20.992 ' 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:20.992 10:43:59 
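[Editor's annotation] The scripts/common.sh trace above is the version gate for lcov: lt 1.15 2 splits both versions on ./-/: into arrays and compares component by component until one side wins (here 1 < 2 at the first component, so the branch/function-coverage LCOV_OPTS get exported). A sketch of that walk, simplified to the '<' case actually exercised; the real cmp_versions also handles '>', '=', and validates digits via decimal():

    # Sketch of the comparison traced at scripts/common.sh@333..368.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }

    lt 1.15 2 && echo "lcov older than 2: enable branch/function coverage opts"
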
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.992 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:20.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:20.993 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:29.137 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:29.137 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:29.137 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:29.137 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:29.137 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:29.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:16:29.138 00:16:29.138 --- 10.0.0.2 ping statistics --- 00:16:29.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.138 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:29.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:29.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:16:29.138 00:16:29.138 --- 10.0.0.1 ping statistics --- 00:16:29.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.138 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=946525 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 946525 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 946525 ']' 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.138 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:29.138 [2024-11-19 10:44:07.447182] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
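The sequence above is the TCP-transport test setup from nvmf/common.sh: both E810 ports (driver ice) are discovered, the target-side port cvl_0_0 is moved into a private network namespace, both ends are addressed on 10.0.0.0/24, an iptables rule admits the NVMe/TCP listener port 4420, connectivity is verified with a ping in each direction, and nvmf_tgt is then launched inside the namespace. A minimal sketch of the same wiring, restricted to commands visible in the trace (the address flushes and the iptables comment match are omitted):

# Target port lives in its own namespace; the initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the default NVMe/TCP port, then check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target application itself runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF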
00:16:29.138 [2024-11-19 10:44:07.447252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.138 [2024-11-19 10:44:07.551492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.138 [2024-11-19 10:44:07.604333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.138 [2024-11-19 10:44:07.604384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.138 [2024-11-19 10:44:07.604393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.138 [2024-11-19 10:44:07.604400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.138 [2024-11-19 10:44:07.604406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.138 [2024-11-19 10:44:07.606631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.138 [2024-11-19 10:44:07.606838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.138 [2024-11-19 10:44:07.606999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.138 [2024-11-19 10:44:07.606999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.138 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.138 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:29.138 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:29.138 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:29.138 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:29.138 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.138 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:29.138 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14278 00:16:29.400 [2024-11-19 10:44:08.495848] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:29.400 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:29.400 { 00:16:29.400 "nqn": "nqn.2016-06.io.spdk:cnode14278", 00:16:29.400 "tgt_name": "foobar", 00:16:29.400 "method": "nvmf_create_subsystem", 00:16:29.400 "req_id": 1 00:16:29.400 } 00:16:29.400 Got JSON-RPC error response 00:16:29.400 response: 00:16:29.400 { 00:16:29.400 "code": -32603, 00:16:29.400 "message": "Unable to find target foobar" 00:16:29.400 }' 00:16:29.400 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:29.400 { 00:16:29.400 "nqn": "nqn.2016-06.io.spdk:cnode14278", 00:16:29.400 "tgt_name": "foobar", 00:16:29.400 "method": "nvmf_create_subsystem", 00:16:29.400 "req_id": 1 00:16:29.400 } 00:16:29.400 Got JSON-RPC error response 00:16:29.400 
response: 00:16:29.400 { 00:16:29.400 "code": -32603, 00:16:29.400 "message": "Unable to find target foobar" 00:16:29.400 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:29.400 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:29.400 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17009 00:16:29.661 [2024-11-19 10:44:08.700715] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17009: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:29.661 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:29.661 { 00:16:29.661 "nqn": "nqn.2016-06.io.spdk:cnode17009", 00:16:29.661 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:29.661 "method": "nvmf_create_subsystem", 00:16:29.661 "req_id": 1 00:16:29.661 } 00:16:29.661 Got JSON-RPC error response 00:16:29.661 response: 00:16:29.661 { 00:16:29.661 "code": -32602, 00:16:29.661 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:29.661 }' 00:16:29.661 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:29.661 { 00:16:29.661 "nqn": "nqn.2016-06.io.spdk:cnode17009", 00:16:29.661 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:29.661 "method": "nvmf_create_subsystem", 00:16:29.661 "req_id": 1 00:16:29.661 } 00:16:29.661 Got JSON-RPC error response 00:16:29.661 response: 00:16:29.661 { 00:16:29.661 "code": -32602, 00:16:29.661 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:29.661 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:29.661 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:29.661 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14096 00:16:29.922 [2024-11-19 10:44:08.909427] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14096: invalid model number 'SPDK_Controller' 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:29.922 { 00:16:29.922 "nqn": "nqn.2016-06.io.spdk:cnode14096", 00:16:29.922 "model_number": "SPDK_Controller\u001f", 00:16:29.922 "method": "nvmf_create_subsystem", 00:16:29.922 "req_id": 1 00:16:29.922 } 00:16:29.922 Got JSON-RPC error response 00:16:29.922 response: 00:16:29.922 { 00:16:29.922 "code": -32602, 00:16:29.922 "message": "Invalid MN SPDK_Controller\u001f" 00:16:29.922 }' 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:29.922 { 00:16:29.922 "nqn": "nqn.2016-06.io.spdk:cnode14096", 00:16:29.922 "model_number": "SPDK_Controller\u001f", 00:16:29.922 "method": "nvmf_create_subsystem", 00:16:29.922 "req_id": 1 00:16:29.922 } 00:16:29.922 Got JSON-RPC error response 00:16:29.922 response: 00:16:29.922 { 00:16:29.922 "code": -32602, 00:16:29.922 "message": "Invalid MN SPDK_Controller\u001f" 00:16:29.922 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:29.922 10:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
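For context around the per-character trace that follows: nvmf_invalid drives nvmf_create_subsystem with deliberately bad arguments and asserts on the JSON-RPC error text, as the three checks above show, and gen_random_s is now assembling the next random serial number one character at a time. A condensed sketch of the assertion pattern, reusing the rpc.py path, NQNs, and error strings from the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Unknown target name -> code -32603, "Unable to find target foobar"
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14278 2>&1) || true
[[ $out == *"Unable to find target"* ]]

# Serial number with an embedded 0x1f byte -> code -32602, "Invalid SN"
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17009 2>&1) || true
[[ $out == *"Invalid SN"* ]]

# Model number with an embedded 0x1f byte -> code -32602, "Invalid MN"
out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14096 2>&1) || true
[[ $out == *"Invalid MN"* ]]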
00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:29.922 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:29.922 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:29.922 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.922 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.922 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:29.922 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:29.922 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:29.922 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.922 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.922 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:29.922 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x67' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 64 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:29.923 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '^3?d[*ZL:gK%)&$j@Fcq,' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '^3?d[*ZL:gK%)&$j@Fcq,' nqn.2016-06.io.spdk:cnode9311 00:16:30.185 [2024-11-19 10:44:09.294905] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9311: invalid serial number '^3?d[*ZL:gK%)&$j@Fcq,' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:30.185 { 00:16:30.185 "nqn": "nqn.2016-06.io.spdk:cnode9311", 00:16:30.185 "serial_number": "^3?d[*ZL:gK%)&$j@Fcq,", 00:16:30.185 "method": "nvmf_create_subsystem", 00:16:30.185 "req_id": 1 00:16:30.185 } 00:16:30.185 Got JSON-RPC error response 00:16:30.185 response: 00:16:30.185 
{ 00:16:30.185 "code": -32602, 00:16:30.185 "message": "Invalid SN ^3?d[*ZL:gK%)&$j@Fcq," 00:16:30.185 }' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:30.185 { 00:16:30.185 "nqn": "nqn.2016-06.io.spdk:cnode9311", 00:16:30.185 "serial_number": "^3?d[*ZL:gK%)&$j@Fcq,", 00:16:30.185 "method": "nvmf_create_subsystem", 00:16:30.185 "req_id": 1 00:16:30.185 } 00:16:30.185 Got JSON-RPC error response 00:16:30.185 response: 00:16:30.185 { 00:16:30.185 "code": -32602, 00:16:30.185 "message": "Invalid SN ^3?d[*ZL:gK%)&$j@Fcq," 00:16:30.185 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
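The per-character trace here is gen_random_s building a 41-character string (an earlier pass built the 21-character serial rejected above); each iteration picks an ASCII code from the chars array, formats it with printf %x, decodes it with echo -e, and appends it. Because invalid.sh set RANDOM=0 at the start, the sequence is reproducible run to run. A condensed sketch of the generator consistent with the trace, not the verbatim script:

gen_random_s() {
    local length=$1 ll string=
    local chars=({32..127})   # candidate ASCII codes, matching the traced array
    for ((ll = 0; ll < length; ll++)); do
        # pick one code, hex-format it, and append the decoded character
        string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}

# With the test's RANDOM=0 seeding, the 21-character pass above produced:
#   ^3?d[*ZL:gK%)&$j@Fcq,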
00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.185 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:30.447 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x4c' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
49 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.448 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=n
00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:30.449 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f'
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a'
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49'
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52'
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Y == \- ]]
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Y}.]C&]=6LC@QUbp1j~2{<}_u8*4t_L6)30}n/zIR'
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Y}.]C&]=6LC@QUbp1j~2{<}_u8*4t_L6)30}n/zIR' nqn.2016-06.io.spdk:cnode7714
[2024-11-19 10:44:09.824794] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7714: invalid model number 'Y}.]C&]=6LC@QUbp1j~2{<}_u8*4t_L6)30}n/zIR'
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:16:30.710 {
00:16:30.710 "nqn": "nqn.2016-06.io.spdk:cnode7714",
00:16:30.710 "model_number": "Y}.]C&]=6LC@QUbp1j~2{<}_u8*4t_L6)30}n/zIR",
00:16:30.710 "method": "nvmf_create_subsystem",
00:16:30.710 "req_id": 1
00:16:30.710 }
00:16:30.710 Got JSON-RPC error response
00:16:30.710 response:
00:16:30.710 {
00:16:30.710 "code": -32602,
00:16:30.710 "message": "Invalid MN Y}.]C&]=6LC@QUbp1j~2{<}_u8*4t_L6)30}n/zIR"
00:16:30.710 }'
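The long run of printf %x / echo -e / string+= entries above is target/invalid.sh building a random 41-character model number one printable ASCII character at a time, then asserting that nvmf_create_subsystem rejects it with an "Invalid MN" error. A minimal sketch of the same pattern, assuming nothing beyond bash; gen_rand_str is a hypothetical name, not the script's own helper:

    # Build a string of $1 random printable ASCII characters (0x21-0x7e),
    # mirroring the printf %x / echo -e '\xNN' / string+=c loop traced above.
    gen_rand_str() {
        local length=$1 string= hex ll
        for ((ll = 0; ll < length; ll++)); do
            printf -v hex '%x' $((RANDOM % 94 + 33))
            string+=$(echo -e "\x$hex")
        done
        printf '%s\n' "$string"
    }

    # The test then captures the RPC error text and pattern-matches it,
    # as invalid.sh@58-59 do above (run from the spdk checkout).
    mn=$(gen_rand_str 41)
    out=$(scripts/rpc.py nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode7714 2>&1) || true
    [[ $out == *"Invalid MN"* ]]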
00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:30.710 { 00:16:30.710 "nqn": "nqn.2016-06.io.spdk:cnode7714", 00:16:30.710 "model_number": "Y}.]C&]=6LC@QUbp1j~2{<}_u8*4t_L6)30}n/zIR", 00:16:30.710 "method": "nvmf_create_subsystem", 00:16:30.710 "req_id": 1 00:16:30.710 } 00:16:30.710 Got JSON-RPC error response 00:16:30.710 response: 00:16:30.710 { 00:16:30.710 "code": -32602, 00:16:30.710 "message": "Invalid MN Y}.]C&]=6LC@QUbp1j~2{<}_u8*4t_L6)30}n/zIR" 00:16:30.710 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:30.710 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:30.971 [2024-11-19 10:44:10.013861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.971 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:31.232 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:31.232 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:31.232 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:31.232 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:31.232 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:31.232 [2024-11-19 10:44:10.399047] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:31.493 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:31.493 { 00:16:31.493 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:31.493 "listen_address": { 00:16:31.493 "trtype": "tcp", 00:16:31.493 "traddr": "", 00:16:31.493 "trsvcid": "4421" 00:16:31.493 }, 00:16:31.493 "method": "nvmf_subsystem_remove_listener", 00:16:31.493 "req_id": 1 00:16:31.493 } 00:16:31.493 Got JSON-RPC error response 00:16:31.493 response: 00:16:31.493 { 00:16:31.493 "code": -32602, 00:16:31.493 "message": "Invalid parameters" 00:16:31.493 }' 00:16:31.493 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:31.493 { 00:16:31.493 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:31.493 "listen_address": { 00:16:31.493 "trtype": "tcp", 00:16:31.493 "traddr": "", 00:16:31.493 "trsvcid": "4421" 00:16:31.493 }, 00:16:31.493 "method": "nvmf_subsystem_remove_listener", 00:16:31.493 "req_id": 1 00:16:31.493 } 00:16:31.493 Got JSON-RPC error response 00:16:31.493 response: 00:16:31.493 { 00:16:31.493 "code": -32602, 00:16:31.493 "message": "Invalid parameters" 00:16:31.493 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:31.493 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13829 -i 0 00:16:31.493 [2024-11-19 10:44:10.587592] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13829: invalid cntlid range [0-65519] 00:16:31.493 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:31.493 { 
00:16:31.493 "nqn": "nqn.2016-06.io.spdk:cnode13829", 00:16:31.493 "min_cntlid": 0, 00:16:31.493 "method": "nvmf_create_subsystem", 00:16:31.493 "req_id": 1 00:16:31.493 } 00:16:31.493 Got JSON-RPC error response 00:16:31.493 response: 00:16:31.493 { 00:16:31.493 "code": -32602, 00:16:31.493 "message": "Invalid cntlid range [0-65519]" 00:16:31.493 }' 00:16:31.493 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:31.493 { 00:16:31.493 "nqn": "nqn.2016-06.io.spdk:cnode13829", 00:16:31.493 "min_cntlid": 0, 00:16:31.493 "method": "nvmf_create_subsystem", 00:16:31.493 "req_id": 1 00:16:31.493 } 00:16:31.493 Got JSON-RPC error response 00:16:31.493 response: 00:16:31.493 { 00:16:31.493 "code": -32602, 00:16:31.493 "message": "Invalid cntlid range [0-65519]" 00:16:31.493 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:31.493 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13542 -i 65520 00:16:31.754 [2024-11-19 10:44:10.772212] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13542: invalid cntlid range [65520-65519] 00:16:31.754 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:31.754 { 00:16:31.754 "nqn": "nqn.2016-06.io.spdk:cnode13542", 00:16:31.754 "min_cntlid": 65520, 00:16:31.754 "method": "nvmf_create_subsystem", 00:16:31.754 "req_id": 1 00:16:31.754 } 00:16:31.754 Got JSON-RPC error response 00:16:31.754 response: 00:16:31.754 { 00:16:31.754 "code": -32602, 00:16:31.755 "message": "Invalid cntlid range [65520-65519]" 00:16:31.755 }' 00:16:31.755 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:31.755 { 00:16:31.755 "nqn": "nqn.2016-06.io.spdk:cnode13542", 00:16:31.755 "min_cntlid": 65520, 00:16:31.755 "method": "nvmf_create_subsystem", 00:16:31.755 "req_id": 1 00:16:31.755 } 00:16:31.755 Got JSON-RPC error response 00:16:31.755 response: 00:16:31.755 { 00:16:31.755 "code": -32602, 00:16:31.755 "message": "Invalid cntlid range [65520-65519]" 00:16:31.755 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:31.755 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16376 -I 0 00:16:32.015 [2024-11-19 10:44:10.960751] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16376: invalid cntlid range [1-0] 00:16:32.015 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:32.015 { 00:16:32.015 "nqn": "nqn.2016-06.io.spdk:cnode16376", 00:16:32.015 "max_cntlid": 0, 00:16:32.015 "method": "nvmf_create_subsystem", 00:16:32.015 "req_id": 1 00:16:32.015 } 00:16:32.015 Got JSON-RPC error response 00:16:32.015 response: 00:16:32.015 { 00:16:32.015 "code": -32602, 00:16:32.015 "message": "Invalid cntlid range [1-0]" 00:16:32.015 }' 00:16:32.015 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:32.015 { 00:16:32.015 "nqn": "nqn.2016-06.io.spdk:cnode16376", 00:16:32.015 "max_cntlid": 0, 00:16:32.015 "method": "nvmf_create_subsystem", 00:16:32.015 "req_id": 1 00:16:32.015 } 00:16:32.015 Got JSON-RPC error response 00:16:32.015 response: 00:16:32.015 { 00:16:32.015 "code": -32602, 00:16:32.015 "message": "Invalid cntlid 
range [1-0]" 00:16:32.015 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:32.015 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21505 -I 65520 00:16:32.015 [2024-11-19 10:44:11.149322] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21505: invalid cntlid range [1-65520] 00:16:32.016 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:32.016 { 00:16:32.016 "nqn": "nqn.2016-06.io.spdk:cnode21505", 00:16:32.016 "max_cntlid": 65520, 00:16:32.016 "method": "nvmf_create_subsystem", 00:16:32.016 "req_id": 1 00:16:32.016 } 00:16:32.016 Got JSON-RPC error response 00:16:32.016 response: 00:16:32.016 { 00:16:32.016 "code": -32602, 00:16:32.016 "message": "Invalid cntlid range [1-65520]" 00:16:32.016 }' 00:16:32.016 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:32.016 { 00:16:32.016 "nqn": "nqn.2016-06.io.spdk:cnode21505", 00:16:32.016 "max_cntlid": 65520, 00:16:32.016 "method": "nvmf_create_subsystem", 00:16:32.016 "req_id": 1 00:16:32.016 } 00:16:32.016 Got JSON-RPC error response 00:16:32.016 response: 00:16:32.016 { 00:16:32.016 "code": -32602, 00:16:32.016 "message": "Invalid cntlid range [1-65520]" 00:16:32.016 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:32.016 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31206 -i 6 -I 5 00:16:32.276 [2024-11-19 10:44:11.337934] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31206: invalid cntlid range [6-5] 00:16:32.276 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:32.276 { 00:16:32.276 "nqn": "nqn.2016-06.io.spdk:cnode31206", 00:16:32.276 "min_cntlid": 6, 00:16:32.276 "max_cntlid": 5, 00:16:32.276 "method": "nvmf_create_subsystem", 00:16:32.276 "req_id": 1 00:16:32.276 } 00:16:32.276 Got JSON-RPC error response 00:16:32.276 response: 00:16:32.276 { 00:16:32.276 "code": -32602, 00:16:32.276 "message": "Invalid cntlid range [6-5]" 00:16:32.276 }' 00:16:32.276 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:32.276 { 00:16:32.276 "nqn": "nqn.2016-06.io.spdk:cnode31206", 00:16:32.276 "min_cntlid": 6, 00:16:32.276 "max_cntlid": 5, 00:16:32.276 "method": "nvmf_create_subsystem", 00:16:32.276 "req_id": 1 00:16:32.276 } 00:16:32.276 Got JSON-RPC error response 00:16:32.276 response: 00:16:32.276 { 00:16:32.276 "code": -32602, 00:16:32.276 "message": "Invalid cntlid range [6-5]" 00:16:32.276 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:32.276 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:32.537 { 00:16:32.537 "name": "foobar", 00:16:32.537 "method": "nvmf_delete_target", 00:16:32.537 "req_id": 1 00:16:32.537 } 00:16:32.537 Got JSON-RPC error response 00:16:32.537 response: 00:16:32.537 { 00:16:32.537 "code": -32602, 00:16:32.537 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:16:32.537 }' 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:32.537 { 00:16:32.537 "name": "foobar", 00:16:32.537 "method": "nvmf_delete_target", 00:16:32.537 "req_id": 1 00:16:32.537 } 00:16:32.537 Got JSON-RPC error response 00:16:32.537 response: 00:16:32.537 { 00:16:32.537 "code": -32602, 00:16:32.537 "message": "The specified target doesn't exist, cannot delete it." 00:16:32.537 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:32.537 rmmod nvme_tcp 00:16:32.537 rmmod nvme_fabrics 00:16:32.537 rmmod nvme_keyring 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 946525 ']' 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 946525 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 946525 ']' 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 946525 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946525 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946525' 00:16:32.537 killing process with pid 946525 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 946525 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 946525 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:32.537 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:32.798 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:32.798 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:16:32.798 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:32.798 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:32.798 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.798 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.798 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.712 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:34.712 00:16:34.712 real 0m14.090s 00:16:34.712 user 0m21.069s 00:16:34.712 sys 0m6.708s 00:16:34.712 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.712 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:34.712 ************************************ 00:16:34.712 END TEST nvmf_invalid 00:16:34.712 ************************************ 00:16:34.712 10:44:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:34.712 10:44:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:34.712 10:44:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.712 10:44:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.712 ************************************ 00:16:34.712 START TEST nvmf_connect_stress 00:16:34.712 ************************************ 00:16:34.712 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:34.974 * Looking for test storage... 
00:16:34.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.974 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:34.974 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:16:34.974 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:34.974 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:34.974 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.974 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:34.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.975 --rc genhtml_branch_coverage=1 00:16:34.975 --rc genhtml_function_coverage=1 00:16:34.975 --rc genhtml_legend=1 00:16:34.975 --rc geninfo_all_blocks=1 00:16:34.975 --rc geninfo_unexecuted_blocks=1 00:16:34.975 00:16:34.975 ' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:34.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.975 --rc genhtml_branch_coverage=1 00:16:34.975 --rc genhtml_function_coverage=1 00:16:34.975 --rc genhtml_legend=1 00:16:34.975 --rc geninfo_all_blocks=1 00:16:34.975 --rc geninfo_unexecuted_blocks=1 00:16:34.975 00:16:34.975 ' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:34.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.975 --rc genhtml_branch_coverage=1 00:16:34.975 --rc genhtml_function_coverage=1 00:16:34.975 --rc genhtml_legend=1 00:16:34.975 --rc geninfo_all_blocks=1 00:16:34.975 --rc geninfo_unexecuted_blocks=1 00:16:34.975 00:16:34.975 ' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:34.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.975 --rc genhtml_branch_coverage=1 00:16:34.975 --rc genhtml_function_coverage=1 00:16:34.975 --rc genhtml_legend=1 00:16:34.975 --rc geninfo_all_blocks=1 00:16:34.975 --rc geninfo_unexecuted_blocks=1 00:16:34.975 00:16:34.975 ' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:34.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:34.975 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:34.976 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.976 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.976 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.976 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:34.976 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:34.976 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:34.976 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:43.114 10:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:43.114 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:43.115 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:43.115 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:43.115 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:43.115 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:43.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:43.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms
00:16:43.115
00:16:43.115 --- 10.0.0.2 ping statistics ---
00:16:43.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:43.115 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:43.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:43.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms
00:16:43.115
00:16:43.115 --- 10.0.0.1 ping statistics ---
00:16:43.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:43.115 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:43.115 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=951743
00:16:43.116 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 951743
00:16:43.116 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:16:43.116 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 951743 ']'
00:16:43.116 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:43.116 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:43.116 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:16:43.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.116 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.116 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.116 [2024-11-19 10:44:21.728315] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:16:43.116 [2024-11-19 10:44:21.728378] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.116 [2024-11-19 10:44:21.829516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:43.116 [2024-11-19 10:44:21.881397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.116 [2024-11-19 10:44:21.881448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.116 [2024-11-19 10:44:21.881457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.116 [2024-11-19 10:44:21.881464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.116 [2024-11-19 10:44:21.881470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.116 [2024-11-19 10:44:21.883527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.116 [2024-11-19 10:44:21.883687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.116 [2024-11-19 10:44:21.883688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.376 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.376 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:43.376 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:43.376 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:43.376 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.638 [2024-11-19 10:44:22.605126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
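Up to this point the target side has been brought up: nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the app answers on its RPC socket. A minimal sketch of that bring-up, using the paths from this log (waitforlisten's body is not shown in this xtrace, so the polling loop below is a hedged reconstruction):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &               # -m 0xE: reactors on cores 1-3; -e: tracepoint group mask
    nvmfpid=$!
    # poll until the RPC socket accepts a trivial call, then provisioning can start
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done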
00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.638 [2024-11-19 10:44:22.630792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.638 NULL1 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=951971 00:16:43.638 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.639 10:44:22 
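Spelled out, the rpc_cmd sequence above provisions the target end to end and then aims the stress initiator at it. A hedged sketch using SPDK's scripts/rpc.py, which is effectively what rpc_cmd dispatches to (flags and arguments verbatim from the xtrace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport; -u = in-capsule data size
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                              # -a: allow any host, -m: max namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                  # listen on the in-namespace address
    scripts/rpc.py bdev_null_create NULL1 1000 512                  # 1000 MB null bdev, 512 B blocks
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &                                                     # 10-second connect/disconnect stress
    PERF_PID=$!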
[the "for i in $(seq 1 20)" / "# cat" xtrace pair above repeats here for the remaining passes of the 20-iteration loop; the entries are identical apart from timestamps] 00:16:43.639 10:44:22
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 951971 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.639 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.900 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.900 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 951971 00:16:43.900 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.900 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.900 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.472 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.472 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 951971 00:16:44.472 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.472 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.472 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.732 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.732 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 951971 00:16:44.733 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.733 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.733 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.993 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.993 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 951971 00:16:44.993 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.993 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.993 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.254 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.254 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 951971 00:16:45.254 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.254 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.254 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.930 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.930 10:44:24 
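The seq 1 20 / cat passes and the kill -0 checks above form the core of the test loop. A hedged reconstruction of its shape (the real script is test/nvmf/target/connect_stress.sh; the RPC text the loop batches into rpc.txt never appears in this xtrace, so it is left elided):

    rpcs=test/nvmf/target/rpc.txt
    for i in $(seq 1 20); do
        echo "<rpc elided in this log>" >>"$rpcs"    # batch 20 RPC lines into the scratch file
    done
    while kill -0 "$PERF_PID" 2>/dev/null; do        # kill -0 sends no signal, only tests existence
        rpc_cmd <"$rpcs"                             # replay the batch while connect_stress is alive
    done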
[the liveness poll above (kill -0 951971 followed by rpc_cmd and the xtrace toggles) repeats here roughly once per second from 10:44:24 through 10:44:32 while connect_stress runs; the entries are identical apart from timestamps] 00:16:53.357 10:44:32
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 951971 00:16:53.357 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.357 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.357 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.926 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.926 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 951971 00:16:53.926 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.926 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.926 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.926 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:54.186 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.186 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 951971 00:16:54.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (951971) - No such process 00:16:54.186 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 951971 00:16:54.186 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:54.186 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:54.186 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:54.186 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:54.186 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:54.186 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:54.186 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:54.187 rmmod nvme_tcp 00:16:54.187 rmmod nvme_fabrics 00:16:54.187 rmmod nvme_keyring 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 951743 ']' 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 951743 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 951743 ']' 00:16:54.187 10:44:33 
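Once kill -0 reports "No such process", the 10-second stress run has ended on its own, and nvmftestfini starts unwinding the initiator side. The kernel-module teardown logged above reduces to the following (a sketch; the bare rmmod lines in the log are modprobe's own verbose output, not separate commands):

    sync                          # flush page cache before yanking the transport modules
    modprobe -v -r nvme-tcp       # also drops now-unused deps, hence rmmod nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics   # run again explicitly in case something else still held it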
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 951743 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 951743 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 951743' 00:16:54.187 killing process with pid 951743 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 951743 00:16:54.187 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 951743 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.446 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.355 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:56.355 00:16:56.355 real 0m21.610s 00:16:56.355 user 0m43.151s 00:16:56.355 sys 0m9.466s 00:16:56.355 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.355 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.355 ************************************ 00:16:56.355 END TEST nvmf_connect_stress 00:16:56.355 ************************************ 00:16:56.355 10:44:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:56.355 10:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:56.355 10:44:35 
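The teardown also shows why the setup rule carried an SPDK_NVMF comment: iptr removes exactly what the test added by filtering that comment out of the saved ruleset and restoring the remainder. A condensed, hedged sketch of the teardown steps logged above (_remove_spdk_ns's body is hidden behind the fd-15 redirect, so the netns delete line is an assumption):

    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess: stop nvmf_tgt (pid 951743 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged at setup
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address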
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.356 10:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.617 ************************************ 00:16:56.617 START TEST nvmf_fused_ordering 00:16:56.617 ************************************ 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:56.617 * Looking for test storage... 00:16:56.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:56.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.617 --rc genhtml_branch_coverage=1 00:16:56.617 --rc genhtml_function_coverage=1 00:16:56.617 --rc genhtml_legend=1 00:16:56.617 --rc geninfo_all_blocks=1 00:16:56.617 --rc geninfo_unexecuted_blocks=1 00:16:56.617 00:16:56.617 ' 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:56.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.617 --rc genhtml_branch_coverage=1 00:16:56.617 --rc genhtml_function_coverage=1 00:16:56.617 --rc genhtml_legend=1 00:16:56.617 --rc geninfo_all_blocks=1 00:16:56.617 --rc geninfo_unexecuted_blocks=1 00:16:56.617 00:16:56.617 ' 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:56.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.617 --rc genhtml_branch_coverage=1 00:16:56.617 --rc genhtml_function_coverage=1 00:16:56.617 --rc genhtml_legend=1 00:16:56.617 --rc geninfo_all_blocks=1 00:16:56.617 --rc geninfo_unexecuted_blocks=1 00:16:56.617 00:16:56.617 ' 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:56.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.617 --rc genhtml_branch_coverage=1 00:16:56.617 --rc genhtml_function_coverage=1 00:16:56.617 --rc genhtml_legend=1 00:16:56.617 --rc geninfo_all_blocks=1 00:16:56.617 --rc geninfo_unexecuted_blocks=1 00:16:56.617 00:16:56.617 ' 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.617 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.879 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=[value elided: paths/export.sh@2 through @4 each prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, so the exported value is those three directories repeated ahead of the stock system PATH] 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=[elided, as above] 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=[elided, as above] 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [PATH value elided, as above] 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:16:56.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.880 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:05.028 10:44:42 
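gather_supported_nvmf_pci_devs, entered at the end of the xtrace above, matches the host's NICs against known Intel (e810, x722) and Mellanox device IDs and then resolves each surviving PCI function to its kernel netdev through sysfs, which is where the "Found net devices under ..." lines below come from. A hedged sketch of that resolution step, hard-coding the two E810 functions this log reports:

    for pci in 0000:4b:00.0 0000:4b:00.1; do             # 0x8086:0x159b, i.e. Intel E810 (ice)
        for net in /sys/bus/pci/devices/$pci/net/*; do
            [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
        done
    done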
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:05.028 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:17:05.028 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:17:05.028 Found net devices under 0000:4b:00.0: cvl_0_0
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:17:05.028 Found net devices under 0000:4b:00.1: cvl_0_1
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:17:05.028 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:05.029 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:05.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:05.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms
00:17:05.029
00:17:05.029 --- 10.0.0.2 ping statistics ---
00:17:05.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:05.029 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:05.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:05.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:17:05.029
00:17:05.029 --- 10.0.0.1 ping statistics ---
00:17:05.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:05.029 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=958338
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 958338
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 958338 ']'
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:05.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:05.029 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:05.029 [2024-11-19 10:44:43.391061] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:17:05.029 [2024-11-19 10:44:43.391125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:05.029 [2024-11-19 10:44:43.492053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:05.029 [2024-11-19 10:44:43.544774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:05.029 [2024-11-19 10:44:43.544829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:05.029 [2024-11-19 10:44:43.544837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:05.029 [2024-11-19 10:44:43.544844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:05.029 [2024-11-19 10:44:43.544851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:05.029 [2024-11-19 10:44:43.545637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:05.290 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:05.290 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:17:05.290 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:05.290 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:05.290 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:05.291 [2024-11-19 10:44:44.274053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:05.291 [2024-11-19 10:44:44.290348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:05.291 NULL1
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.291 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:17:05.291 [2024-11-19 10:44:44.347650] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:17:05.291 [2024-11-19 10:44:44.347716] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958392 ]
00:17:05.862 Attached to nqn.2016-06.io.spdk:cnode1
00:17:05.862 Namespace ID: 1 size: 1GB
00:17:05.862 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022): 1022 further sequential entries, timestamps 00:17:05.862 through 00:17:07.902, condensed]
00:17:07.902 fused_ordering(1023)
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:07.902 rmmod nvme_tcp
00:17:07.902 rmmod nvme_fabrics
00:17:07.902 rmmod nvme_keyring
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 958338 ']'
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 958338
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 958338 ']'
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 958338
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 958338
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 958338'
00:17:07.902 killing process with pid 958338
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 958338
00:17:07.902 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 958338
00:17:07.902 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:07.903 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:10.448
00:17:10.448 real 0m13.574s
00:17:10.448 user 0m7.068s
00:17:10.448 sys 0m7.437s
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:10.448 ************************************
00:17:10.448 END TEST nvmf_fused_ordering
00:17:10.448 ************************************
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:10.448 ************************************
00:17:10.448 START TEST nvmf_ns_masking
00:17:10.448 ************************************
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:17:10.448 * Looking for test storage...
00:17:10.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:17:10.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:10.448 --rc genhtml_branch_coverage=1
00:17:10.448 --rc genhtml_function_coverage=1
00:17:10.448 --rc genhtml_legend=1
00:17:10.448 --rc geninfo_all_blocks=1
00:17:10.448 --rc geninfo_unexecuted_blocks=1
00:17:10.448
00:17:10.448 '
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:17:10.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:10.448 --rc genhtml_branch_coverage=1
00:17:10.448 --rc genhtml_function_coverage=1
00:17:10.448 --rc genhtml_legend=1
00:17:10.448 --rc geninfo_all_blocks=1
00:17:10.448 --rc geninfo_unexecuted_blocks=1
00:17:10.448
00:17:10.448 '
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:17:10.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:10.448 --rc genhtml_branch_coverage=1
00:17:10.448 --rc genhtml_function_coverage=1
00:17:10.448 --rc genhtml_legend=1
00:17:10.448 --rc geninfo_all_blocks=1
00:17:10.448 --rc geninfo_unexecuted_blocks=1
00:17:10.448
00:17:10.448 '
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:17:10.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:10.448 --rc genhtml_branch_coverage=1
00:17:10.448 --rc genhtml_function_coverage=1
00:17:10.448 --rc genhtml_legend=1
00:17:10.448 --rc geninfo_all_blocks=1
00:17:10.448 --rc geninfo_unexecuted_blocks=1
00:17:10.448
00:17:10.448 '
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:10.448 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:10.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0fbb4a9b-3a22-486b-b8a0-402c25078026 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e1f556f1-2664-4836-9e7d-ebcbb0feef7b 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ae907b80-07d7-452b-bb9b-4ea2b1d35f51 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.449 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:18.633 10:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:18.633 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:18.633 10:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:18.633 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.633 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:18.634 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
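For reference, the discovery loop traced above resolves each PCI function to its kernel net device by globbing sysfs, as the nvmf/common.sh lines 411 and 427-428 in the trace show. A minimal standalone sketch of that lookup (the address 0000:4b:00.0 is the one reported in the log; any NIC's PCI address works the same way):

    #!/usr/bin/env bash
    # Resolve a PCI function to its kernel network interface(s) via sysfs,
    # mirroring the pci_net_devs logic traced above.
    pci=0000:4b:00.0                                  # first port found in the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one path per bound netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
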
00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:18.634 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.634 10:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:18.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:17:18.634 00:17:18.634 --- 10.0.0.2 ping statistics --- 00:17:18.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.634 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:17:18.634 00:17:18.634 --- 10.0.0.1 ping statistics --- 00:17:18.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.634 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.634 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.634 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=963192 00:17:18.634 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 963192 00:17:18.634 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:18.634 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 963192 ']' 00:17:18.634 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.634 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.634 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.634 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.634 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.634 [2024-11-19 10:44:57.059880] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:17:18.634 [2024-11-19 10:44:57.059950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.634 [2024-11-19 10:44:57.157829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.634 [2024-11-19 10:44:57.208605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.634 [2024-11-19 10:44:57.208654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.635 [2024-11-19 10:44:57.208663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.635 [2024-11-19 10:44:57.208670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.635 [2024-11-19 10:44:57.208676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
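At this point the trace has finished wiring the single-host test topology: one physical port (cvl_0_0) was moved into a private network namespace to act as the target side, its sibling (cvl_0_1) stayed in the root namespace as the initiator, and nvmf_tgt was launched inside that namespace. A condensed sketch of the same wiring, using only commands, interface names, and addresses that appear in the trace above (run as root; assumes the two ports exist):

    #!/usr/bin/env bash
    # Rebuild the NVMe/TCP target/initiator split from the trace above.
    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic to the target's listener port (4420 per the log)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator check
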
00:17:18.635 [2024-11-19 10:44:57.209451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.895 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.895 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:18.895 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.895 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.895 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.895 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.895 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:19.155 [2024-11-19 10:44:58.099785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.155 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:19.155 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:19.155 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:19.155 Malloc1 00:17:19.155 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:19.414 Malloc2 00:17:19.414 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:19.675 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:19.936 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.936 [2024-11-19 10:44:59.127869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.200 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:20.200 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ae907b80-07d7-452b-bb9b-4ea2b1d35f51 -a 10.0.0.2 -s 4420 -i 4 00:17:20.200 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.200 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:20.200 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.200 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:20.200 
10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:22.745 [ 0]:0x1 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22c870e1e034412f917fd24ed910e053 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 22c870e1e034412f917fd24ed910e053 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:22.745 [ 0]:0x1 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22c870e1e034412f917fd24ed910e053 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 22c870e1e034412f917fd24ed910e053 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.745 10:45:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:22.745 [ 1]:0x2 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=958618ab916f4b7d8e143156e65163e7 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 958618ab916f4b7d8e143156e65163e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:22.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.745 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.005 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:23.005 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:23.005 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ae907b80-07d7-452b-bb9b-4ea2b1d35f51 -a 10.0.0.2 -s 4420 -i 4 00:17:23.266 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:23.266 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:23.266 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.266 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:23.266 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:23.266 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:25.808 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:25.809 [ 0]:0x2 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=958618ab916f4b7d8e143156e65163e7 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 958618ab916f4b7d8e143156e65163e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:25.809 [ 0]:0x1 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22c870e1e034412f917fd24ed910e053 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 22c870e1e034412f917fd24ed910e053 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:25.809 [ 1]:0x2 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=958618ab916f4b7d8e143156e65163e7 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 958618ab916f4b7d8e143156e65163e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.809 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.069 10:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.069 [ 0]:0x2 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=958618ab916f4b7d8e143156e65163e7 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 958618ab916f4b7d8e143156e65163e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.069 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:26.330 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:26.330 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ae907b80-07d7-452b-bb9b-4ea2b1d35f51 -a 10.0.0.2 -s 4420 -i 4 00:17:26.590 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:26.590 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:26.590 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.590 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:26.590 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:26.590 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:28.500 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:28.500 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:28.500 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.500 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:28.500 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.500 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:28.500 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:28.500 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:28.760 [ 0]:0x1 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22c870e1e034412f917fd24ed910e053 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 22c870e1e034412f917fd24ed910e053 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:28.760 [ 1]:0x2 00:17:28.760 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:28.761 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.021 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=958618ab916f4b7d8e143156e65163e7 00:17:29.021 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 958618ab916f4b7d8e143156e65163e7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.021 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.021 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:29.282 [ 0]:0x2 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=958618ab916f4b7d8e143156e65163e7 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 958618ab916f4b7d8e143156e65163e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.282 10:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:29.282 [2024-11-19 10:45:08.413430] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:29.282 request: 00:17:29.282 { 00:17:29.282 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.282 "nsid": 2, 00:17:29.282 "host": "nqn.2016-06.io.spdk:host1", 00:17:29.282 "method": "nvmf_ns_remove_host", 00:17:29.282 "req_id": 1 00:17:29.282 } 00:17:29.282 Got JSON-RPC error response 00:17:29.282 response: 00:17:29.282 { 00:17:29.282 "code": -32602, 00:17:29.282 "message": "Invalid parameters" 00:17:29.282 } 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:29.282 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:29.283 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:29.283 10:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:29.283 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.283 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:29.283 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.283 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:29.283 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.283 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:29.283 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:29.283 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:29.544 [ 0]:0x2 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=958618ab916f4b7d8e143156e65163e7 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 958618ab916f4b7d8e143156e65163e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:29.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=965653 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 965653 /var/tmp/host.sock 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 965653 ']' 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:29.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.544 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:29.544 [2024-11-19 10:45:08.671941] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:17:29.544 [2024-11-19 10:45:08.671991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965653 ] 00:17:29.805 [2024-11-19 10:45:08.759617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.805 [2024-11-19 10:45:08.795130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.376 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.376 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:30.376 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.636 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:30.896 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0fbb4a9b-3a22-486b-b8a0-402c25078026 00:17:30.896 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:30.896 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0FBB4A9B3A22486BB8A0402C25078026 -i 00:17:30.896 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e1f556f1-2664-4836-9e7d-ebcbb0feef7b 00:17:30.896 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:30.897 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E1F556F1266448369E7DEBCBB0FEEF7B -i 00:17:31.156 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:31.416 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:31.677 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:31.677 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:31.938 nvme0n1 00:17:31.938 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:31.938 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:32.198 nvme1n2 00:17:32.198 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:32.198 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:32.198 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:32.198 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:32.198 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:32.198 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:32.198 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:32.198 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:32.198 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:32.458 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0fbb4a9b-3a22-486b-b8a0-402c25078026 == \0\f\b\b\4\a\9\b\-\3\a\2\2\-\4\8\6\b\-\b\8\a\0\-\4\0\2\c\2\5\0\7\8\0\2\6 ]] 00:17:32.458 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:32.458 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:32.458 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:32.718 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
e1f556f1-2664-4836-9e7d-ebcbb0feef7b == \e\1\f\5\5\6\f\1\-\2\6\6\4\-\4\8\3\6\-\9\e\7\d\-\e\b\c\b\b\0\f\e\e\f\7\b ]] 00:17:32.718 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:32.718 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:32.979 [2024-11-19 10:45:12.035601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:2 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.979 [2024-11-19 10:45:12.035636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:32.979 [2024-11-19 10:45:12.035650] nvme_ns.c: 287:nvme_ctrlr_identify_id_desc: *WARNING*: Failed to retrieve NS ID Descriptor List 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 0fbb4a9b-3a22-486b-b8a0-402c25078026 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0FBB4A9B3A22486BB8A0402C25078026 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0FBB4A9B3A22486BB8A0402C25078026 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:32.979 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0FBB4A9B3A22486BB8A0402C25078026 00:17:33.240 [2024-11-19 10:45:12.203746] 
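Annotation: this step deliberately hands nvmf_subsystem_add_ns a bdev name that does not exist, so the NOT wrapper expects the RPC to fail; the equivalent direct call, whose -32602 response is logged below:

    # expected to fail: no bdev named "invalid" is registered on the target
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid \
        -n 1 -g 0FBB4A9B3A22486BB8A0402C25078026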
bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:33.240 [2024-11-19 10:45:12.203770] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:33.240 [2024-11-19 10:45:12.203777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.240 request: 00:17:33.240 { 00:17:33.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.240 "namespace": { 00:17:33.241 "bdev_name": "invalid", 00:17:33.241 "nsid": 1, 00:17:33.241 "nguid": "0FBB4A9B3A22486BB8A0402C25078026", 00:17:33.241 "no_auto_visible": false 00:17:33.241 }, 00:17:33.241 "method": "nvmf_subsystem_add_ns", 00:17:33.241 "req_id": 1 00:17:33.241 } 00:17:33.241 Got JSON-RPC error response 00:17:33.241 response: 00:17:33.241 { 00:17:33.241 "code": -32602, 00:17:33.241 "message": "Invalid parameters" 00:17:33.241 } 00:17:33.241 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:33.241 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.241 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.241 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.241 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 0fbb4a9b-3a22-486b-b8a0-402c25078026 00:17:33.241 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:33.241 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0FBB4A9B3A22486BB8A0402C25078026 -i 00:17:33.241 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 965653 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 965653 ']' 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 965653 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 965653 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:35.787 10:45:14 
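Annotation: hostrpc (ns_masking.sh@48, expanded in the trace) forwards RPCs to the host-side socket, and the last assertion before teardown checks that, with the namespaces re-added via -i (not auto-visible), the attached controller exposes no bdevs at all:

    hostrpc() {
        "$rootdir"/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
    # nothing visible to this host -> the bdev list must be empty
    (( $(hostrpc bdev_get_bdevs | jq length) == 0 ))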
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 965653' 00:17:35.787 killing process with pid 965653 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 965653 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 965653 00:17:35.787 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.047 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:36.047 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:36.047 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.047 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.047 rmmod nvme_tcp 00:17:36.047 rmmod nvme_fabrics 00:17:36.047 rmmod nvme_keyring 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 963192 ']' 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 963192 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 963192 ']' 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 963192 00:17:36.047 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:36.048 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.048 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 963192 00:17:36.048 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.048 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.048 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 963192' 00:17:36.048 killing process with pid 963192 00:17:36.048 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 963192 00:17:36.048 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 963192 00:17:36.307 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:36.308 10:45:15 
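Annotation: standard teardown for the masking test, traced above and finishing below; condensed (module names match the rmmod lines in the log, killprocess and nvmftestfini are the common.sh helpers):

    killprocess $hostpid                               # 965653, host-side app
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp    # pulls nvme_fabrics and nvme_keyring with it
    killprocess $nvmfpid                               # 963192, the target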
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:36.308 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:36.308 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:36.308 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:36.308 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:36.308 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:36.308 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:36.308 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:36.308 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.308 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.308 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.220 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:38.220 00:17:38.220 real 0m28.100s 00:17:38.220 user 0m31.807s 00:17:38.220 sys 0m8.301s 00:17:38.220 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.220 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.220 ************************************ 00:17:38.220 END TEST nvmf_ns_masking 00:17:38.220 ************************************ 00:17:38.220 10:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:38.220 10:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:38.220 10:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:38.220 10:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.220 10:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.481 ************************************ 00:17:38.481 START TEST nvmf_nvme_cli 00:17:38.481 ************************************ 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:38.481 * Looking for test storage... 
00:17:38.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.481 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:38.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.482 --rc genhtml_branch_coverage=1 00:17:38.482 --rc genhtml_function_coverage=1 00:17:38.482 --rc genhtml_legend=1 00:17:38.482 --rc geninfo_all_blocks=1 00:17:38.482 --rc geninfo_unexecuted_blocks=1 00:17:38.482 00:17:38.482 ' 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:38.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.482 --rc genhtml_branch_coverage=1 00:17:38.482 --rc genhtml_function_coverage=1 00:17:38.482 --rc genhtml_legend=1 00:17:38.482 --rc geninfo_all_blocks=1 00:17:38.482 --rc geninfo_unexecuted_blocks=1 00:17:38.482 00:17:38.482 ' 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:38.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.482 --rc genhtml_branch_coverage=1 00:17:38.482 --rc genhtml_function_coverage=1 00:17:38.482 --rc genhtml_legend=1 00:17:38.482 --rc geninfo_all_blocks=1 00:17:38.482 --rc geninfo_unexecuted_blocks=1 00:17:38.482 00:17:38.482 ' 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:38.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.482 --rc genhtml_branch_coverage=1 00:17:38.482 --rc genhtml_function_coverage=1 00:17:38.482 --rc genhtml_legend=1 00:17:38.482 --rc geninfo_all_blocks=1 00:17:38.482 --rc geninfo_unexecuted_blocks=1 00:17:38.482 00:17:38.482 ' 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
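Annotation: the gate traced above picks the lcov option set by comparing versions field by field; a simplified sketch of scripts/common.sh's lt()/cmp_versions (the real code also regex-validates each field, as the [[ 1 =~ ^[0-9]+$ ]] checks in the trace show):

    lt() {   # true when dotted version $1 < $2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.x: use the legacy --rc lcov_* option names"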
00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.482 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.483 10:45:17 
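Annotation: the host identity reused by every later discover/connect call was minted a few lines up by nvme-cli itself; a sketch of the nvmf/common.sh derivation (the exact expansion used for NVME_HOSTID is assumed):

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # 00d0226a-fbea-ec11-9bc7-a4bf019282be
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")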
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.483 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.744 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:38.744 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:38.744 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:38.744 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:46.892 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:46.893 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:46.893 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.893 
10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:46.893 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:46.893 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
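Annotation: the scan above walked the known Intel/Mellanox NVMe-oF-capable PCI IDs and settled on this box's two E810 ports (8086:159b, ice driver), published as net devices cvl_0_0 and cvl_0_1; a manual equivalent of the sysfs lookup:

    lspci -nn | grep '8086:159b'                 # the two E810 ports
    ls /sys/bus/pci/devices/0000:4b:00.0/net     # -> cvl_0_0 (target side)
    ls /sys/bus/pci/devices/0000:4b:00.1/net     # -> cvl_0_1 (initiator side)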
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:46.893 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.893 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.893 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.893 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.893 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:46.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:17:46.893 00:17:46.893 --- 10.0.0.2 ping statistics --- 00:17:46.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.893 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:17:46.893 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:17:46.893 00:17:46.893 --- 10.0.0.1 ping statistics --- 00:17:46.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.893 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=971670 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 971670 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 971670 ']' 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.894 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:46.894 [2024-11-19 10:45:25.188604] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
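Annotation: the topology nvmf_tcp_init just built and the two pings verified: the target port is moved into its own network namespace so that 10.0.0.1 (initiator) and 10.0.0.2 (target) traffic crosses the physical link; condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target app itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF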
00:17:46.894 [2024-11-19 10:45:25.188666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.894 [2024-11-19 10:45:25.288521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.894 [2024-11-19 10:45:25.342935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.894 [2024-11-19 10:45:25.342990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.894 [2024-11-19 10:45:25.342999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.894 [2024-11-19 10:45:25.343007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.894 [2024-11-19 10:45:25.343014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.894 [2024-11-19 10:45:25.345381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.894 [2024-11-19 10:45:25.345541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.894 [2024-11-19 10:45:25.345704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.894 [2024-11-19 10:45:25.345704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:46.894 [2024-11-19 10:45:26.065888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.894 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 Malloc0 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
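Annotation: target-side wiring for the nvme_cli test, all through rpc_cmd (rpc.py against the namespaced app's default /var/tmp/spdk.sock); condensed from the trace around this point:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420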
00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 Malloc1 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 [2024-11-19 10:45:26.182385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.156 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:17:47.418 00:17:47.418 Discovery Log Number of Records 2, Generation counter 2 00:17:47.418 =====Discovery Log Entry 0====== 00:17:47.418 trtype: tcp 00:17:47.418 adrfam: ipv4 00:17:47.418 subtype: current discovery subsystem 00:17:47.418 treq: not required 00:17:47.418 portid: 0 00:17:47.418 trsvcid: 4420 00:17:47.418 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:47.418 traddr: 10.0.0.2 00:17:47.418 eflags: explicit discovery connections, duplicate discovery information 00:17:47.418 sectype: none 00:17:47.418 =====Discovery Log Entry 1====== 00:17:47.418 trtype: tcp 00:17:47.418 adrfam: ipv4 00:17:47.418 subtype: nvme subsystem 00:17:47.418 treq: not required 00:17:47.418 portid: 0 00:17:47.418 trsvcid: 4420 00:17:47.418 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:47.418 traddr: 10.0.0.2 00:17:47.418 eflags: none 00:17:47.418 sectype: none 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:47.418 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.812 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:48.812 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:48.812 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.812 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:48.813 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:48.813 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:50.725 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:50.725 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:50.725 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.986 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:50.986 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.986 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:50.986 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:50.986 10:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:50.986 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:50.986 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:50.986 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:50.986 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:50.987 /dev/nvme0n2 ]] 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:50.987 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:51.247 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.508 10:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.508 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.509 rmmod nvme_tcp 00:17:51.509 rmmod nvme_fabrics 00:17:51.509 rmmod nvme_keyring 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 971670 ']' 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 971670 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 971670 ']' 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 971670 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 971670 
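Aside: the host-side steps traced above condense to a short sequence (addresses, NQN, and serial are the ones printed in this log; the --hostnqn/--hostid arguments used by the test are elided here):

    # Discover the target, connect, and verify both namespaces arrived.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # Each namespace surfaces as a block device carrying the subsystem serial;
    # the test expects a count of 2 (nvme0n1 and nvme0n2).
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1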
00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 971670' 00:17:51.509 killing process with pid 971670 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 971670 00:17:51.509 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 971670 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.769 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.317 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:54.317 00:17:54.317 real 0m15.463s 00:17:54.317 user 0m24.037s 00:17:54.317 sys 0m6.359s 00:17:54.317 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.317 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.317 ************************************ 00:17:54.317 END TEST nvmf_nvme_cli 00:17:54.317 ************************************ 00:17:54.317 10:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:54.317 10:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:54.317 10:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:54.317 10:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.317 10:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.317 ************************************ 00:17:54.317 START TEST nvmf_vfio_user 00:17:54.317 ************************************ 00:17:54.317 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:17:54.317 * Looking for test storage... 00:17:54.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.317 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:54.317 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:17:54.317 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:54.317 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:54.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.318 --rc genhtml_branch_coverage=1 00:17:54.318 --rc genhtml_function_coverage=1 00:17:54.318 --rc genhtml_legend=1 00:17:54.318 --rc geninfo_all_blocks=1 00:17:54.318 --rc geninfo_unexecuted_blocks=1 00:17:54.318 00:17:54.318 ' 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:54.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.318 --rc genhtml_branch_coverage=1 00:17:54.318 --rc genhtml_function_coverage=1 00:17:54.318 --rc genhtml_legend=1 00:17:54.318 --rc geninfo_all_blocks=1 00:17:54.318 --rc geninfo_unexecuted_blocks=1 00:17:54.318 00:17:54.318 ' 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:54.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.318 --rc genhtml_branch_coverage=1 00:17:54.318 --rc genhtml_function_coverage=1 00:17:54.318 --rc genhtml_legend=1 00:17:54.318 --rc geninfo_all_blocks=1 00:17:54.318 --rc geninfo_unexecuted_blocks=1 00:17:54.318 00:17:54.318 ' 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:54.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.318 --rc genhtml_branch_coverage=1 00:17:54.318 --rc genhtml_function_coverage=1 00:17:54.318 --rc genhtml_legend=1 00:17:54.318 --rc geninfo_all_blocks=1 00:17:54.318 --rc geninfo_unexecuted_blocks=1 00:17:54.318 00:17:54.318 ' 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.318 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=973316 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 973316' 00:17:54.319 Process pid: 973316 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 973316 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 973316 ']' 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.319 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:54.319 [2024-11-19 10:45:33.268250] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:17:54.319 [2024-11-19 10:45:33.268319] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.319 [2024-11-19 10:45:33.354023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.319 [2024-11-19 10:45:33.388464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.319 [2024-11-19 10:45:33.388496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:54.319 [2024-11-19 10:45:33.388502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.319 [2024-11-19 10:45:33.388507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.319 [2024-11-19 10:45:33.388511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.319 [2024-11-19 10:45:33.390064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.319 [2024-11-19 10:45:33.390265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.319 [2024-11-19 10:45:33.390355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.319 [2024-11-19 10:45:33.390357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.888 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.888 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:54.888 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:56.271 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:56.271 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:56.271 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:56.271 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:56.271 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:56.271 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:56.271 Malloc1 00:17:56.271 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:56.532 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:56.793 10:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:57.053 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:57.053 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:57.053 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:57.053 Malloc2 00:17:57.053 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
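Aside: after the VFIOUSER transport is created once up front, each iteration of the device loop traced above issues the same steps; a sketch for device 1, with the exact paths and names from this log:

    # vfio-user listens on a filesystem path rather than an IP:port pair.
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0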
00:17:57.312 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:57.573 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:57.573 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:57.573 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:57.573 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:57.573 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:57.573 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:57.573 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:57.836 [2024-11-19 10:45:36.773736] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:17:57.836 [2024-11-19 10:45:36.773782] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974013 ] 00:17:57.836 [2024-11-19 10:45:36.813448] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:57.836 [2024-11-19 10:45:36.818422] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:57.836 [2024-11-19 10:45:36.818439] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb2a4922000 00:17:57.836 [2024-11-19 10:45:36.819425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.836 [2024-11-19 10:45:36.820421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.836 [2024-11-19 10:45:36.821434] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.836 [2024-11-19 10:45:36.822441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:57.836 [2024-11-19 10:45:36.823450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:57.836 [2024-11-19 10:45:36.824461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.836 [2024-11-19 10:45:36.825462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:57.836 [2024-11-19 10:45:36.826462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.836 [2024-11-19 10:45:36.827476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:57.836 [2024-11-19 10:45:36.827483] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb2a4917000 00:17:57.836 [2024-11-19 10:45:36.828394] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:57.836 [2024-11-19 10:45:36.842435] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:57.836 [2024-11-19 10:45:36.842459] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:57.836 [2024-11-19 10:45:36.844577] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:57.836 [2024-11-19 10:45:36.844609] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:57.836 [2024-11-19 10:45:36.844666] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:57.836 [2024-11-19 10:45:36.844678] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:57.836 [2024-11-19 10:45:36.844681] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:57.836 [2024-11-19 10:45:36.845578] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:57.836 [2024-11-19 10:45:36.845584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:57.836 [2024-11-19 10:45:36.845589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:57.836 [2024-11-19 10:45:36.846581] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:57.836 [2024-11-19 10:45:36.846588] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:57.836 [2024-11-19 10:45:36.846593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:57.836 [2024-11-19 10:45:36.847585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:57.836 [2024-11-19 10:45:36.847591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:57.836 [2024-11-19 10:45:36.848593] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
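Aside: the BAR probing and CC.EN/CSTS.RDY handshake traced above all come from a single identify run; the invocation, copied from this log, can be replayed on its own once the listener exists:

    # -r selects the vfio-user transport, socket directory, and subsystem NQN;
    # the -L flags enable the nvme/nvme_vfio/vfio_pci debug logs shown here.
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci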
00:17:57.836 [2024-11-19 10:45:36.848600] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:57.836 [2024-11-19 10:45:36.848603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:57.836 [2024-11-19 10:45:36.848608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:57.836 [2024-11-19 10:45:36.848713] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:57.836 [2024-11-19 10:45:36.848717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:57.836 [2024-11-19 10:45:36.848721] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:57.836 [2024-11-19 10:45:36.849600] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:57.836 [2024-11-19 10:45:36.850603] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:57.836 [2024-11-19 10:45:36.851608] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:57.836 [2024-11-19 10:45:36.852611] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:57.836 [2024-11-19 10:45:36.852660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:57.836 [2024-11-19 10:45:36.853623] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:57.836 [2024-11-19 10:45:36.853628] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:57.836 [2024-11-19 10:45:36.853631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:57.836 [2024-11-19 10:45:36.853646] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:57.836 [2024-11-19 10:45:36.853652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:57.836 [2024-11-19 10:45:36.853661] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:57.836 [2024-11-19 10:45:36.853665] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.836 [2024-11-19 10:45:36.853668] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.836 [2024-11-19 10:45:36.853678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:17:57.836 [2024-11-19 10:45:36.853713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:57.836 [2024-11-19 10:45:36.853720] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:57.836 [2024-11-19 10:45:36.853723] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:57.836 [2024-11-19 10:45:36.853726] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:57.836 [2024-11-19 10:45:36.853730] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:57.836 [2024-11-19 10:45:36.853736] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:57.836 [2024-11-19 10:45:36.853739] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:57.836 [2024-11-19 10:45:36.853743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:57.836 [2024-11-19 10:45:36.853750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:57.836 [2024-11-19 10:45:36.853757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:57.836 [2024-11-19 10:45:36.853770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:57.836 [2024-11-19 10:45:36.853777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.836 [2024-11-19 10:45:36.853783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.836 [2024-11-19 10:45:36.853789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.836 [2024-11-19 10:45:36.853795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.836 [2024-11-19 10:45:36.853798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:57.836 [2024-11-19 10:45:36.853803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:57.836 [2024-11-19 10:45:36.853810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:57.836 [2024-11-19 10:45:36.853818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:57.836 [2024-11-19 10:45:36.853823] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:57.836 
[2024-11-19 10:45:36.853827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:57.836 [2024-11-19 10:45:36.853832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:57.836 [2024-11-19 10:45:36.853836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.853842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.853850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.853894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.853900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.853905] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:57.837 [2024-11-19 10:45:36.853908] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:57.837 [2024-11-19 10:45:36.853912] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.837 [2024-11-19 10:45:36.853916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.853926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.853932] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:57.837 [2024-11-19 10:45:36.853938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.853944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.853949] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:57.837 [2024-11-19 10:45:36.853952] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.837 [2024-11-19 10:45:36.853954] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.837 [2024-11-19 10:45:36.853958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.853977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.853985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.853991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.853996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:57.837 [2024-11-19 10:45:36.853999] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.837 [2024-11-19 10:45:36.854002] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.837 [2024-11-19 10:45:36.854006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.854014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.854019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.854024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.854029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.854033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.854037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.854041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.854044] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:57.837 [2024-11-19 10:45:36.854047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:57.837 [2024-11-19 10:45:36.854052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:57.837 [2024-11-19 10:45:36.854065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.854076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.854084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.854094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.854102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.854113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.854121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.854131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.854140] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:57.837 [2024-11-19 10:45:36.854143] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:57.837 [2024-11-19 10:45:36.854146] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:57.837 [2024-11-19 10:45:36.854148] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:57.837 [2024-11-19 10:45:36.854151] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:57.837 [2024-11-19 10:45:36.854155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:57.837 [2024-11-19 10:45:36.854164] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:57.837 [2024-11-19 10:45:36.854167] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:57.837 [2024-11-19 10:45:36.854169] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.837 [2024-11-19 10:45:36.854174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.854179] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:57.837 [2024-11-19 10:45:36.854182] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.837 [2024-11-19 10:45:36.854184] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.837 [2024-11-19 10:45:36.854188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.854194] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:57.837 [2024-11-19 10:45:36.854197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:57.837 [2024-11-19 10:45:36.854199] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.837 [2024-11-19 10:45:36.854203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:57.837 [2024-11-19 10:45:36.854208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.854218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.854225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:57.837 [2024-11-19 10:45:36.854230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:57.837 ===================================================== 00:17:57.837 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:57.837 ===================================================== 00:17:57.837 Controller Capabilities/Features 00:17:57.837 ================================ 00:17:57.837 Vendor ID: 4e58 00:17:57.837 Subsystem Vendor ID: 4e58 00:17:57.837 Serial Number: SPDK1 00:17:57.837 Model Number: SPDK bdev Controller 00:17:57.837 Firmware Version: 25.01 00:17:57.837 Recommended Arb Burst: 6 00:17:57.837 IEEE OUI Identifier: 8d 6b 50 00:17:57.837 Multi-path I/O 00:17:57.837 May have multiple subsystem ports: Yes 00:17:57.837 May have multiple controllers: Yes 00:17:57.837 Associated with SR-IOV VF: No 00:17:57.837 Max Data Transfer Size: 131072 00:17:57.837 Max Number of Namespaces: 32 00:17:57.837 Max Number of I/O Queues: 127 00:17:57.837 NVMe Specification Version (VS): 1.3 00:17:57.837 NVMe Specification Version (Identify): 1.3 00:17:57.837 Maximum Queue Entries: 256 00:17:57.837 Contiguous Queues Required: Yes 00:17:57.837 Arbitration Mechanisms Supported 00:17:57.837 Weighted Round Robin: Not Supported 00:17:57.837 Vendor Specific: Not Supported 00:17:57.837 Reset Timeout: 15000 ms 00:17:57.837 Doorbell Stride: 4 bytes 00:17:57.837 NVM Subsystem Reset: Not Supported 00:17:57.837 Command Sets Supported 00:17:57.837 NVM Command Set: Supported 00:17:57.837 Boot Partition: Not Supported 00:17:57.837 Memory Page Size Minimum: 4096 bytes 00:17:57.837 Memory Page Size Maximum: 4096 bytes 00:17:57.837 Persistent Memory Region: Not Supported 00:17:57.837 Optional Asynchronous Events Supported 00:17:57.837 Namespace Attribute Notices: Supported 00:17:57.837 Firmware Activation Notices: Not Supported 00:17:57.837 ANA Change Notices: Not Supported 00:17:57.837 PLE Aggregate Log Change Notices: Not Supported 00:17:57.837 LBA Status Info Alert Notices: Not Supported 00:17:57.837 EGE Aggregate Log Change Notices: Not Supported 00:17:57.838 Normal NVM Subsystem Shutdown event: Not Supported 00:17:57.838 Zone Descriptor Change Notices: Not Supported 00:17:57.838 Discovery Log Change Notices: Not Supported 00:17:57.838 Controller Attributes 00:17:57.838 128-bit Host Identifier: Supported 00:17:57.838 Non-Operational Permissive Mode: Not Supported 00:17:57.838 NVM Sets: Not Supported 00:17:57.838 Read Recovery Levels: Not Supported 00:17:57.838 Endurance Groups: Not Supported 00:17:57.838 Predictable Latency Mode: Not Supported 00:17:57.838 Traffic Based Keep ALive: Not Supported 00:17:57.838 Namespace Granularity: Not Supported 00:17:57.838 SQ Associations: Not Supported 00:17:57.838 UUID List: Not Supported 00:17:57.838 Multi-Domain Subsystem: Not Supported 00:17:57.838 Fixed Capacity Management: Not Supported 00:17:57.838 Variable Capacity Management: Not Supported 00:17:57.838 Delete Endurance Group: Not Supported 00:17:57.838 Delete NVM Set: Not Supported 00:17:57.838 Extended LBA Formats Supported: Not Supported 00:17:57.838 Flexible Data Placement Supported: Not Supported 00:17:57.838 00:17:57.838 Controller Memory Buffer Support 00:17:57.838 ================================ 00:17:57.838 
Supported: No 00:17:57.838 00:17:57.838 Persistent Memory Region Support 00:17:57.838 ================================ 00:17:57.838 Supported: No 00:17:57.838 00:17:57.838 Admin Command Set Attributes 00:17:57.838 ============================ 00:17:57.838 Security Send/Receive: Not Supported 00:17:57.838 Format NVM: Not Supported 00:17:57.838 Firmware Activate/Download: Not Supported 00:17:57.838 Namespace Management: Not Supported 00:17:57.838 Device Self-Test: Not Supported 00:17:57.838 Directives: Not Supported 00:17:57.838 NVMe-MI: Not Supported 00:17:57.838 Virtualization Management: Not Supported 00:17:57.838 Doorbell Buffer Config: Not Supported 00:17:57.838 Get LBA Status Capability: Not Supported 00:17:57.838 Command & Feature Lockdown Capability: Not Supported 00:17:57.838 Abort Command Limit: 4 00:17:57.838 Async Event Request Limit: 4 00:17:57.838 Number of Firmware Slots: N/A 00:17:57.838 Firmware Slot 1 Read-Only: N/A 00:17:57.838 Firmware Activation Without Reset: N/A 00:17:57.838 Multiple Update Detection Support: N/A 00:17:57.838 Firmware Update Granularity: No Information Provided 00:17:57.838 Per-Namespace SMART Log: No 00:17:57.838 Asymmetric Namespace Access Log Page: Not Supported 00:17:57.838 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:57.838 Command Effects Log Page: Supported 00:17:57.838 Get Log Page Extended Data: Supported 00:17:57.838 Telemetry Log Pages: Not Supported 00:17:57.838 Persistent Event Log Pages: Not Supported 00:17:57.838 Supported Log Pages Log Page: May Support 00:17:57.838 Commands Supported & Effects Log Page: Not Supported 00:17:57.838 Feature Identifiers & Effects Log Page:May Support 00:17:57.838 NVMe-MI Commands & Effects Log Page: May Support 00:17:57.838 Data Area 4 for Telemetry Log: Not Supported 00:17:57.838 Error Log Page Entries Supported: 128 00:17:57.838 Keep Alive: Supported 00:17:57.838 Keep Alive Granularity: 10000 ms 00:17:57.838 00:17:57.838 NVM Command Set Attributes 00:17:57.838 ========================== 00:17:57.838 Submission Queue Entry Size 00:17:57.838 Max: 64 00:17:57.838 Min: 64 00:17:57.838 Completion Queue Entry Size 00:17:57.838 Max: 16 00:17:57.838 Min: 16 00:17:57.838 Number of Namespaces: 32 00:17:57.838 Compare Command: Supported 00:17:57.838 Write Uncorrectable Command: Not Supported 00:17:57.838 Dataset Management Command: Supported 00:17:57.838 Write Zeroes Command: Supported 00:17:57.838 Set Features Save Field: Not Supported 00:17:57.838 Reservations: Not Supported 00:17:57.838 Timestamp: Not Supported 00:17:57.838 Copy: Supported 00:17:57.838 Volatile Write Cache: Present 00:17:57.838 Atomic Write Unit (Normal): 1 00:17:57.838 Atomic Write Unit (PFail): 1 00:17:57.838 Atomic Compare & Write Unit: 1 00:17:57.838 Fused Compare & Write: Supported 00:17:57.838 Scatter-Gather List 00:17:57.838 SGL Command Set: Supported (Dword aligned) 00:17:57.838 SGL Keyed: Not Supported 00:17:57.838 SGL Bit Bucket Descriptor: Not Supported 00:17:57.838 SGL Metadata Pointer: Not Supported 00:17:57.838 Oversized SGL: Not Supported 00:17:57.838 SGL Metadata Address: Not Supported 00:17:57.838 SGL Offset: Not Supported 00:17:57.838 Transport SGL Data Block: Not Supported 00:17:57.838 Replay Protected Memory Block: Not Supported 00:17:57.838 00:17:57.838 Firmware Slot Information 00:17:57.838 ========================= 00:17:57.838 Active slot: 1 00:17:57.838 Slot 1 Firmware Revision: 25.01 00:17:57.838 00:17:57.838 00:17:57.838 Commands Supported and Effects 00:17:57.838 ============================== 00:17:57.838 Admin 
Commands 00:17:57.838 -------------- 00:17:57.838 Get Log Page (02h): Supported 00:17:57.838 Identify (06h): Supported 00:17:57.838 Abort (08h): Supported 00:17:57.838 Set Features (09h): Supported 00:17:57.838 Get Features (0Ah): Supported 00:17:57.838 Asynchronous Event Request (0Ch): Supported 00:17:57.838 Keep Alive (18h): Supported 00:17:57.838 I/O Commands 00:17:57.838 ------------ 00:17:57.838 Flush (00h): Supported LBA-Change 00:17:57.838 Write (01h): Supported LBA-Change 00:17:57.838 Read (02h): Supported 00:17:57.838 Compare (05h): Supported 00:17:57.838 Write Zeroes (08h): Supported LBA-Change 00:17:57.838 Dataset Management (09h): Supported LBA-Change 00:17:57.838 Copy (19h): Supported LBA-Change 00:17:57.838 00:17:57.838 Error Log 00:17:57.838 ========= 00:17:57.838 00:17:57.838 Arbitration 00:17:57.838 =========== 00:17:57.838 Arbitration Burst: 1 00:17:57.838 00:17:57.838 Power Management 00:17:57.838 ================ 00:17:57.838 Number of Power States: 1 00:17:57.838 Current Power State: Power State #0 00:17:57.838 Power State #0: 00:17:57.838 Max Power: 0.00 W 00:17:57.838 Non-Operational State: Operational 00:17:57.838 Entry Latency: Not Reported 00:17:57.838 Exit Latency: Not Reported 00:17:57.838 Relative Read Throughput: 0 00:17:57.838 Relative Read Latency: 0 00:17:57.838 Relative Write Throughput: 0 00:17:57.838 Relative Write Latency: 0 00:17:57.838 Idle Power: Not Reported 00:17:57.838 Active Power: Not Reported 00:17:57.838 Non-Operational Permissive Mode: Not Supported 00:17:57.838 00:17:57.838 Health Information 00:17:57.838 ================== 00:17:57.838 Critical Warnings: 00:17:57.838 Available Spare Space: OK 00:17:57.838 Temperature: OK 00:17:57.838 Device Reliability: OK 00:17:57.838 Read Only: No 00:17:57.838 Volatile Memory Backup: OK 00:17:57.838 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:57.838 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:57.838 Available Spare: 0% 00:17:57.838 [2024-11-19 10:45:36.854302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:57.838 [2024-11-19 10:45:36.854316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:57.838 [2024-11-19 10:45:36.854335] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:57.838 [2024-11-19 10:45:36.854342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.838 [2024-11-19 10:45:36.854347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.838 [2024-11-19 10:45:36.854351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.838 [2024-11-19 10:45:36.854355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.838 [2024-11-19 10:45:36.857164] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:57.838 [2024-11-19 10:45:36.857172] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:57.838 [2024-11-19 10:45:36.857649]
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:57.838 [2024-11-19 10:45:36.857690] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:57.838 [2024-11-19 10:45:36.857694] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:57.838 [2024-11-19 10:45:36.858655] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:57.838 [2024-11-19 10:45:36.858663] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:57.838 [2024-11-19 10:45:36.858712] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:57.838 [2024-11-19 10:45:36.859678] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:57.838 Available Spare Threshold: 0% 00:17:57.838 Life Percentage Used: 0% 00:17:57.838 Data Units Read: 0 00:17:57.839 Data Units Written: 0 00:17:57.839 Host Read Commands: 0 00:17:57.839 Host Write Commands: 0 00:17:57.839 Controller Busy Time: 0 minutes 00:17:57.839 Power Cycles: 0 00:17:57.839 Power On Hours: 0 hours 00:17:57.839 Unsafe Shutdowns: 0 00:17:57.839 Unrecoverable Media Errors: 0 00:17:57.839 Lifetime Error Log Entries: 0 00:17:57.839 Warning Temperature Time: 0 minutes 00:17:57.839 Critical Temperature Time: 0 minutes 00:17:57.839 00:17:57.839 Number of Queues 00:17:57.839 ================ 00:17:57.839 Number of I/O Submission Queues: 127 00:17:57.839 Number of I/O Completion Queues: 127 00:17:57.839 00:17:57.839 Active Namespaces 00:17:57.839 ================= 00:17:57.839 Namespace ID:1 00:17:57.839 Error Recovery Timeout: Unlimited 00:17:57.839 Command Set Identifier: NVM (00h) 00:17:57.839 Deallocate: Supported 00:17:57.839 Deallocated/Unwritten Error: Not Supported 00:17:57.839 Deallocated Read Value: Unknown 00:17:57.839 Deallocate in Write Zeroes: Not Supported 00:17:57.839 Deallocated Guard Field: 0xFFFF 00:17:57.839 Flush: Supported 00:17:57.839 Reservation: Supported 00:17:57.839 Namespace Sharing Capabilities: Multiple Controllers 00:17:57.839 Size (in LBAs): 131072 (0GiB) 00:17:57.839 Capacity (in LBAs): 131072 (0GiB) 00:17:57.839 Utilization (in LBAs): 131072 (0GiB) 00:17:57.839 NGUID: 13729B8E5903489C95D974766A9D9B16 00:17:57.839 UUID: 13729b8e-5903-489c-95d9-74766a9d9b16 00:17:57.839 Thin Provisioning: Not Supported 00:17:57.839 Per-NS Atomic Units: Yes 00:17:57.839 Atomic Boundary Size (Normal): 0 00:17:57.839 Atomic Boundary Size (PFail): 0 00:17:57.839 Atomic Boundary Offset: 0 00:17:57.839 Maximum Single Source Range Length: 65535 00:17:57.839 Maximum Copy Length: 65535 00:17:57.839 Maximum Source Range Count: 1 00:17:57.839 NGUID/EUI64 Never Reused: No 00:17:57.839 Namespace Write Protected: No 00:17:57.839 Number of LBA Formats: 1 00:17:57.839 Current LBA Format: LBA Format #00 00:17:57.839 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:57.839 00:17:57.839 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
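The run that follows is SPDK's spdk_nvme_perf benchmark, pointed at the vfio-user socket directory instead of a PCIe address. A minimal sketch of the same invocation in isolation (the transport string and flags are copied verbatim from the xtrace line above; the per-flag notes are the usual spdk_nvme_perf meanings and worth confirming against --help for this build):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # build tree used by this job
  # -q 128 : queue depth    -o 4096 : I/O size in bytes    -w read : workload pattern
  # -t 5   : run time (s)   -c 0x2  : core mask (core 1)   -s 256  : hugepage memory (MB)
  # -g : single-file hugepage segments (assumption: needed so the vfio-user target can
  #      map the initiator's I/O buffers)
  "$SPDK/build/bin/spdk_nvme_perf" \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2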
00:17:58.100 [2024-11-19 10:45:37.051854] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:03.392 Initializing NVMe Controllers 00:18:03.392 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:03.392 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:03.392 Initialization complete. Launching workers. 00:18:03.392 ======================================================== 00:18:03.392 Latency(us) 00:18:03.392 Device Information : IOPS MiB/s Average min max 00:18:03.392 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40032.68 156.38 3197.60 852.39 7770.37 00:18:03.392 ======================================================== 00:18:03.392 Total : 40032.68 156.38 3197.60 852.39 7770.37 00:18:03.392 00:18:03.392 [2024-11-19 10:45:42.072786] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:03.392 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:03.392 [2024-11-19 10:45:42.264632] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.679 Initializing NVMe Controllers 00:18:08.679 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:08.679 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:08.679 Initialization complete. Launching workers. 
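The Latency(us) tables these runs print are easy to cross-check: at the 4096-byte I/O size used here, MiB/s equals IOPS x 4096 / 2^20, and with the queue kept full, Little's law (average latency ~ queue depth / IOPS) roughly reproduces the Average column. A quick check against the read table above (awk used only for the float arithmetic):

  # 40032.68 IOPS at 4 KiB each -> 156.38 MiB/s, as printed above
  awk 'BEGIN { printf "%.2f MiB/s\n", 40032.68 * 4096 / 1048576 }'
  # queue depth 128 / 40032.68 IOPS -> ~3197 us, close to the reported 3197.60 us average
  awk 'BEGIN { printf "%.2f us\n", 128 / 40032.68 * 1e6 }'

The write-side table that follows obeys the same relations: 16050.84 IOPS -> 62.70 MiB/s, and 128 / 16050.84 -> ~7975 us against the reported 7974.14 us average.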
00:18:08.679 ======================================================== 00:18:08.679 Latency(us) 00:18:08.679 Device Information : IOPS MiB/s Average min max 00:18:08.679 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.84 62.70 7974.14 4985.94 9977.01 00:18:08.679 ======================================================== 00:18:08.679 Total : 16050.84 62.70 7974.14 4985.94 9977.01 00:18:08.679 00:18:08.679 [2024-11-19 10:45:47.294237] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.679 10:45:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:08.679 [2024-11-19 10:45:47.502119] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:13.968 [2024-11-19 10:45:52.584450] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:13.968 Initializing NVMe Controllers 00:18:13.968 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:13.968 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:13.968 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:13.968 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:13.968 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:13.968 Initialization complete. Launching workers. 00:18:13.968 Starting thread on core 2 00:18:13.968 Starting thread on core 3 00:18:13.968 Starting thread on core 1 00:18:13.968 10:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:13.968 [2024-11-19 10:45:52.829492] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:17.271 [2024-11-19 10:45:55.886796] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:17.271 Initializing NVMe Controllers 00:18:17.271 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.271 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.271 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:17.271 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:17.271 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:17.271 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:17.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:17.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:17.271 Initialization complete. Launching workers. 
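Next comes the arbitration example. The test passes only the runtime, transport, and memory options and lets the tool fill in the rest; the tool echoes its full effective configuration (the '-q 64 -s 131072 -w randrw -M 50 ... -n 100000 -i -1' line above). In the per-core rows that follow, IO/s is per-thread throughput and secs/100000 ios is the projected time to complete the configured 100000 I/Os, e.g. 100000 / 13198.00 ~ 7.58 s for core 0. A sketch of the invocation:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 3 : run time (s); -g : single-file hugepage segments; -d 256 : DPDK hugepage
  # memory in MB (assumption, by analogy with -s 256 in the perf runs above)
  "$SPDK/build/examples/arbitration" -t 3 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -d 256 -g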
00:18:17.271 Starting thread on core 1 with urgent priority queue 00:18:17.271 Starting thread on core 2 with urgent priority queue 00:18:17.271 Starting thread on core 3 with urgent priority queue 00:18:17.271 Starting thread on core 0 with urgent priority queue 00:18:17.271 SPDK bdev Controller (SPDK1 ) core 0: 13198.00 IO/s 7.58 secs/100000 ios 00:18:17.271 SPDK bdev Controller (SPDK1 ) core 1: 8329.67 IO/s 12.01 secs/100000 ios 00:18:17.271 SPDK bdev Controller (SPDK1 ) core 2: 12861.67 IO/s 7.78 secs/100000 ios 00:18:17.271 SPDK bdev Controller (SPDK1 ) core 3: 9276.67 IO/s 10.78 secs/100000 ios 00:18:17.271 ======================================================== 00:18:17.271 00:18:17.271 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:17.271 [2024-11-19 10:45:56.123536] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:17.271 Initializing NVMe Controllers 00:18:17.271 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.271 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.271 Namespace ID: 1 size: 0GB 00:18:17.271 Initialization complete. 00:18:17.271 INFO: using host memory buffer for IO 00:18:17.271 Hello world! 00:18:17.271 [2024-11-19 10:45:56.157758] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:17.271 10:45:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:17.271 [2024-11-19 10:45:56.395612] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:18.226 Initializing NVMe Controllers 00:18:18.226 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:18.226 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:18.226 Initialization complete. Launching workers. 
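The overhead tool launched above measures per-I/O software overhead; with -H it prints separate submit and complete histograms, one row per microsecond bucket, giving the cumulative fraction of I/Os and, in parentheses, the count that landed in that bucket. The 3986.773 - 4014.080 rows at the bottom of each histogram hold the slow outliers behind the ~4 ms max values in the avg/min/max summary line. The invocation reduces to (a sketch, flags verbatim from the xtrace line above):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -o 4096 : I/O size (bytes)   -t 1 : run time (s)   -H : print latency histograms
  # -g / -d 256 : memory options as in the runs above (assumption: same meanings)
  "$SPDK/test/nvme/overhead/overhead" -o 4096 -t 1 -H -g -d 256 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'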
00:18:18.226 submit (in ns) avg, min, max = 7334.1, 2824.2, 4000481.7 00:18:18.226 complete (in ns) avg, min, max = 14033.6, 1647.5, 3998254.2 00:18:18.226 00:18:18.226 Submit histogram 00:18:18.226 ================ 00:18:18.226 Range in us Cumulative Count 00:18:18.226 2.813 - 2.827: 0.0298% ( 6) 00:18:18.226 2.827 - 2.840: 0.8140% ( 158) 00:18:18.226 2.840 - 2.853: 3.0873% ( 458) 00:18:18.226 2.853 - 2.867: 6.6710% ( 722) 00:18:18.226 2.867 - 2.880: 11.8976% ( 1053) 00:18:18.226 2.880 - 2.893: 17.1787% ( 1064) 00:18:18.226 2.893 - 2.907: 23.4923% ( 1272) 00:18:18.226 2.907 - 2.920: 29.2947% ( 1169) 00:18:18.226 2.920 - 2.933: 35.3005% ( 1210) 00:18:18.226 2.933 - 2.947: 40.1995% ( 987) 00:18:18.226 2.947 - 2.960: 44.5525% ( 877) 00:18:18.226 2.960 - 2.973: 50.4988% ( 1198) 00:18:18.226 2.973 - 2.987: 58.7383% ( 1660) 00:18:18.226 2.987 - 3.000: 68.1590% ( 1898) 00:18:18.226 3.000 - 3.013: 76.9246% ( 1766) 00:18:18.226 3.013 - 3.027: 83.5559% ( 1336) 00:18:18.226 3.027 - 3.040: 89.6064% ( 1219) 00:18:18.226 3.040 - 3.053: 93.9991% ( 885) 00:18:18.226 3.053 - 3.067: 96.7936% ( 563) 00:18:18.226 3.067 - 3.080: 98.1486% ( 273) 00:18:18.226 3.080 - 3.093: 98.8236% ( 136) 00:18:18.226 3.093 - 3.107: 99.1661% ( 69) 00:18:18.226 3.107 - 3.120: 99.3399% ( 35) 00:18:18.226 3.120 - 3.133: 99.4292% ( 18) 00:18:18.226 3.133 - 3.147: 99.4739% ( 9) 00:18:18.226 3.147 - 3.160: 99.4987% ( 5) 00:18:18.226 3.160 - 3.173: 99.5086% ( 2) 00:18:18.226 3.173 - 3.187: 99.5235% ( 3) 00:18:18.226 3.200 - 3.213: 99.5285% ( 1) 00:18:18.226 3.213 - 3.227: 99.5334% ( 1) 00:18:18.226 3.227 - 3.240: 99.5384% ( 1) 00:18:18.226 3.467 - 3.493: 99.5434% ( 1) 00:18:18.226 3.653 - 3.680: 99.5533% ( 2) 00:18:18.226 3.787 - 3.813: 99.5582% ( 1) 00:18:18.226 3.840 - 3.867: 99.5682% ( 2) 00:18:18.226 3.947 - 3.973: 99.5731% ( 1) 00:18:18.226 4.240 - 4.267: 99.5781% ( 1) 00:18:18.226 4.320 - 4.347: 99.5831% ( 1) 00:18:18.226 4.453 - 4.480: 99.5880% ( 1) 00:18:18.226 4.533 - 4.560: 99.5930% ( 1) 00:18:18.226 4.800 - 4.827: 99.6079% ( 3) 00:18:18.226 4.907 - 4.933: 99.6128% ( 1) 00:18:18.226 4.933 - 4.960: 99.6178% ( 1) 00:18:18.226 4.960 - 4.987: 99.6277% ( 2) 00:18:18.226 4.987 - 5.013: 99.6377% ( 2) 00:18:18.226 5.013 - 5.040: 99.6426% ( 1) 00:18:18.226 5.040 - 5.067: 99.6526% ( 2) 00:18:18.226 5.067 - 5.093: 99.6575% ( 1) 00:18:18.226 5.147 - 5.173: 99.6625% ( 1) 00:18:18.226 5.173 - 5.200: 99.6674% ( 1) 00:18:18.226 5.227 - 5.253: 99.6724% ( 1) 00:18:18.226 5.360 - 5.387: 99.6774% ( 1) 00:18:18.226 5.440 - 5.467: 99.6873% ( 2) 00:18:18.226 5.493 - 5.520: 99.6923% ( 1) 00:18:18.226 5.653 - 5.680: 99.6972% ( 1) 00:18:18.226 5.680 - 5.707: 99.7022% ( 1) 00:18:18.226 5.707 - 5.733: 99.7072% ( 1) 00:18:18.226 5.733 - 5.760: 99.7121% ( 1) 00:18:18.226 5.813 - 5.840: 99.7171% ( 1) 00:18:18.226 5.840 - 5.867: 99.7270% ( 2) 00:18:18.226 5.867 - 5.893: 99.7369% ( 2) 00:18:18.226 5.920 - 5.947: 99.7419% ( 1) 00:18:18.226 6.027 - 6.053: 99.7469% ( 1) 00:18:18.226 6.080 - 6.107: 99.7518% ( 1) 00:18:18.226 6.160 - 6.187: 99.7618% ( 2) 00:18:18.227 6.240 - 6.267: 99.7717% ( 2) 00:18:18.227 6.267 - 6.293: 99.7816% ( 2) 00:18:18.227 6.373 - 6.400: 99.7915% ( 2) 00:18:18.227 6.400 - 6.427: 99.7965% ( 1) 00:18:18.227 6.427 - 6.453: 99.8064% ( 2) 00:18:18.227 6.507 - 6.533: 99.8114% ( 1) 00:18:18.227 6.533 - 6.560: 99.8163% ( 1) 00:18:18.227 6.560 - 6.587: 99.8213% ( 1) 00:18:18.227 6.587 - 6.613: 99.8263% ( 1) 00:18:18.227 6.640 - 6.667: 99.8312% ( 1) 00:18:18.227 6.667 - 6.693: 99.8362% ( 1) 00:18:18.227 6.800 - 6.827: 99.8412% ( 1) 00:18:18.227 
6.880 - 6.933: 99.8461% ( 1) 00:18:18.227 6.933 - 6.987: 99.8511% ( 1) 00:18:18.227 [2024-11-19 10:45:57.416199] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:18.488 6.987 - 7.040: 99.8561% ( 1) 00:18:18.488 7.040 - 7.093: 99.8610% ( 1) 00:18:18.488 7.093 - 7.147: 99.8660% ( 1) 00:18:18.488 7.200 - 7.253: 99.8709% ( 1) 00:18:18.488 7.253 - 7.307: 99.8759% ( 1) 00:18:18.488 7.413 - 7.467: 99.8809% ( 1) 00:18:18.488 7.680 - 7.733: 99.8858% ( 1) 00:18:18.488 8.373 - 8.427: 99.8908% ( 1) 00:18:18.488 3986.773 - 4014.080: 100.0000% ( 22) 00:18:18.488 00:18:18.488 Complete histogram 00:18:18.488 ================== 00:18:18.488 Range in us Cumulative Count 00:18:18.488 1.647 - 1.653: 0.6403% ( 129) 00:18:18.488 1.653 - 1.660: 0.9580% ( 64) 00:18:18.488 1.660 - 1.667: 1.0175% ( 12) 00:18:18.488 1.667 - 1.673: 1.2161% ( 40) 00:18:18.488 1.673 - 1.680: 1.2458% ( 6) 00:18:18.488 1.680 - 1.687: 1.2607% ( 3) 00:18:18.488 1.687 - 1.693: 1.2707% ( 2) 00:18:18.488 1.693 - 1.700: 1.2856% ( 3) 00:18:18.488 1.700 - 1.707: 1.2905% ( 1) 00:18:18.488 1.707 - 1.720: 40.7108% ( 7942) 00:18:18.488 1.720 - 1.733: 62.0092% ( 4291) 00:18:18.488 1.733 - 1.747: 77.4557% ( 3112) 00:18:18.488 1.747 - 1.760: 83.4218% ( 1202) 00:18:18.488 1.760 - 1.773: 84.1019% ( 137) 00:18:18.488 1.773 - 1.787: 89.2292% ( 1033) 00:18:18.488 1.787 - 1.800: 94.9124% ( 1145) 00:18:18.488 1.800 - 1.813: 97.8309% ( 588) 00:18:18.489 1.813 - 1.827: 99.0966% ( 255) 00:18:18.489 1.827 - 1.840: 99.4689% ( 75) 00:18:18.489 1.840 - 1.853: 99.5285% ( 12) 00:18:18.489 1.893 - 1.907: 99.5334% ( 1) 00:18:18.489 1.907 - 1.920: 99.5384% ( 1) 00:18:18.489 2.013 - 2.027: 99.5434% ( 1) 00:18:18.489 3.440 - 3.467: 99.5483% ( 1) 00:18:18.489 3.707 - 3.733: 99.5533% ( 1) 00:18:18.489 3.947 - 3.973: 99.5582% ( 1) 00:18:18.489 4.133 - 4.160: 99.5632% ( 1) 00:18:18.489 4.480 - 4.507: 99.5731% ( 2) 00:18:18.489 4.667 - 4.693: 99.5781% ( 1) 00:18:18.489 4.720 - 4.747: 99.5831% ( 1) 00:18:18.489 4.773 - 4.800: 99.5880% ( 1) 00:18:18.489 4.800 - 4.827: 99.5930% ( 1) 00:18:18.489 4.960 - 4.987: 99.5980% ( 1) 00:18:18.489 5.067 - 5.093: 99.6029% ( 1) 00:18:18.489 5.093 - 5.120: 99.6079% ( 1) 00:18:18.489 5.227 - 5.253: 99.6128% ( 1) 00:18:18.489 5.253 - 5.280: 99.6178% ( 1) 00:18:18.489 5.360 - 5.387: 99.6228% ( 1) 00:18:18.489 5.547 - 5.573: 99.6277% ( 1) 00:18:18.489 5.627 - 5.653: 99.6327% ( 1) 00:18:18.489 5.653 - 5.680: 99.6377% ( 1) 00:18:18.489 5.680 - 5.707: 99.6426% ( 1) 00:18:18.489 5.813 - 5.840: 99.6476% ( 1) 00:18:18.489 5.867 - 5.893: 99.6526% ( 1) 00:18:18.489 5.893 - 5.920: 99.6575% ( 1) 00:18:18.489 6.000 - 6.027: 99.6625% ( 1) 00:18:18.489 6.080 - 6.107: 99.6724% ( 2) 00:18:18.489 6.240 - 6.267: 99.6774% ( 1) 00:18:18.489 6.400 - 6.427: 99.6823% ( 1) 00:18:18.489 6.480 - 6.507: 99.6873% ( 1) 00:18:18.489 6.987 - 7.040: 99.6923% ( 1) 00:18:18.489 3986.773 - 4014.080: 100.0000% ( 62) 00:18:18.489 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 
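From here the script's aer_vfio_user helper drives an Asynchronous Event Request test over JSON-RPC: it lists the current subsystems, starts test/nvme/aer/aer against cnode1 (synchronized through a touch file), then hot-adds a second namespace so the target raises a namespace-attribute-changed event, which the tool acknowledges below with 'aer_cb - Changed Namespace'. The RPC side reduces to this sequence (a sketch; the rpc.py subcommands are exactly those in the xtrace lines that follow):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  "$RPC" nvmf_get_subsystems                        # before: cnode1 exposes only Malloc1 (nsid 1)
  "$RPC" bdev_malloc_create 64 512 --name Malloc3   # 64 MB malloc bdev, 512 B blocks = 131072 LBAs
  "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # attach as NSID 2; AEN fires
  "$RPC" nvmf_get_subsystems                        # after: Malloc3 listed under cnode1 with nsid 2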
00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:18.489 [ 00:18:18.489 { 00:18:18.489 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:18.489 "subtype": "Discovery", 00:18:18.489 "listen_addresses": [], 00:18:18.489 "allow_any_host": true, 00:18:18.489 "hosts": [] 00:18:18.489 }, 00:18:18.489 { 00:18:18.489 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:18.489 "subtype": "NVMe", 00:18:18.489 "listen_addresses": [ 00:18:18.489 { 00:18:18.489 "trtype": "VFIOUSER", 00:18:18.489 "adrfam": "IPv4", 00:18:18.489 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:18.489 "trsvcid": "0" 00:18:18.489 } 00:18:18.489 ], 00:18:18.489 "allow_any_host": true, 00:18:18.489 "hosts": [], 00:18:18.489 "serial_number": "SPDK1", 00:18:18.489 "model_number": "SPDK bdev Controller", 00:18:18.489 "max_namespaces": 32, 00:18:18.489 "min_cntlid": 1, 00:18:18.489 "max_cntlid": 65519, 00:18:18.489 "namespaces": [ 00:18:18.489 { 00:18:18.489 "nsid": 1, 00:18:18.489 "bdev_name": "Malloc1", 00:18:18.489 "name": "Malloc1", 00:18:18.489 "nguid": "13729B8E5903489C95D974766A9D9B16", 00:18:18.489 "uuid": "13729b8e-5903-489c-95d9-74766a9d9b16" 00:18:18.489 } 00:18:18.489 ] 00:18:18.489 }, 00:18:18.489 { 00:18:18.489 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:18.489 "subtype": "NVMe", 00:18:18.489 "listen_addresses": [ 00:18:18.489 { 00:18:18.489 "trtype": "VFIOUSER", 00:18:18.489 "adrfam": "IPv4", 00:18:18.489 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:18.489 "trsvcid": "0" 00:18:18.489 } 00:18:18.489 ], 00:18:18.489 "allow_any_host": true, 00:18:18.489 "hosts": [], 00:18:18.489 "serial_number": "SPDK2", 00:18:18.489 "model_number": "SPDK bdev Controller", 00:18:18.489 "max_namespaces": 32, 00:18:18.489 "min_cntlid": 1, 00:18:18.489 "max_cntlid": 65519, 00:18:18.489 "namespaces": [ 00:18:18.489 { 00:18:18.489 "nsid": 1, 00:18:18.489 "bdev_name": "Malloc2", 00:18:18.489 "name": "Malloc2", 00:18:18.489 "nguid": "C4FD0574D0CE46B9BB64970D50239BB8", 00:18:18.489 "uuid": "c4fd0574-d0ce-46b9-bb64-970d50239bb8" 00:18:18.489 } 00:18:18.489 ] 00:18:18.489 } 00:18:18.489 ] 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=978038 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:18.489 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:18.750 [2024-11-19 10:45:57.788531] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:18.750 Malloc3 00:18:18.750 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:19.011 [2024-11-19 10:45:57.990862] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:19.011 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:19.011 Asynchronous Event Request test 00:18:19.011 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:19.011 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:19.011 Registering asynchronous event callbacks... 00:18:19.011 Starting namespace attribute notice tests for all controllers... 00:18:19.011 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:19.011 aer_cb - Changed Namespace 00:18:19.011 Cleaning up... 00:18:19.011 [ 00:18:19.011 { 00:18:19.011 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:19.011 "subtype": "Discovery", 00:18:19.011 "listen_addresses": [], 00:18:19.011 "allow_any_host": true, 00:18:19.011 "hosts": [] 00:18:19.011 }, 00:18:19.011 { 00:18:19.011 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:19.011 "subtype": "NVMe", 00:18:19.011 "listen_addresses": [ 00:18:19.011 { 00:18:19.011 "trtype": "VFIOUSER", 00:18:19.011 "adrfam": "IPv4", 00:18:19.011 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:19.011 "trsvcid": "0" 00:18:19.011 } 00:18:19.011 ], 00:18:19.011 "allow_any_host": true, 00:18:19.011 "hosts": [], 00:18:19.011 "serial_number": "SPDK1", 00:18:19.011 "model_number": "SPDK bdev Controller", 00:18:19.011 "max_namespaces": 32, 00:18:19.011 "min_cntlid": 1, 00:18:19.011 "max_cntlid": 65519, 00:18:19.011 "namespaces": [ 00:18:19.011 { 00:18:19.011 "nsid": 1, 00:18:19.011 "bdev_name": "Malloc1", 00:18:19.011 "name": "Malloc1", 00:18:19.011 "nguid": "13729B8E5903489C95D974766A9D9B16", 00:18:19.011 "uuid": "13729b8e-5903-489c-95d9-74766a9d9b16" 00:18:19.011 }, 00:18:19.011 { 00:18:19.011 "nsid": 2, 00:18:19.011 "bdev_name": "Malloc3", 00:18:19.011 "name": "Malloc3", 00:18:19.011 "nguid": "902290E7D21747F59DCCB047CEA60806", 00:18:19.011 "uuid": "902290e7-d217-47f5-9dcc-b047cea60806" 00:18:19.011 } 00:18:19.011 ] 00:18:19.011 }, 00:18:19.011 { 00:18:19.011 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:19.011 "subtype": "NVMe", 00:18:19.011 "listen_addresses": [ 00:18:19.011 { 00:18:19.011 "trtype": "VFIOUSER", 00:18:19.011 "adrfam": "IPv4", 00:18:19.011 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:19.011 "trsvcid": "0" 00:18:19.011 } 00:18:19.011 ], 00:18:19.011 "allow_any_host": true, 00:18:19.011 "hosts": [], 00:18:19.011 "serial_number": "SPDK2", 00:18:19.011 "model_number": "SPDK bdev 
Controller", 00:18:19.011 "max_namespaces": 32, 00:18:19.011 "min_cntlid": 1, 00:18:19.011 "max_cntlid": 65519, 00:18:19.011 "namespaces": [ 00:18:19.011 { 00:18:19.011 "nsid": 1, 00:18:19.011 "bdev_name": "Malloc2", 00:18:19.011 "name": "Malloc2", 00:18:19.011 "nguid": "C4FD0574D0CE46B9BB64970D50239BB8", 00:18:19.011 "uuid": "c4fd0574-d0ce-46b9-bb64-970d50239bb8" 00:18:19.011 } 00:18:19.011 ] 00:18:19.011 } 00:18:19.011 ] 00:18:19.011 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 978038 00:18:19.011 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:19.011 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:19.011 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:19.011 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:19.274 [2024-11-19 10:45:58.232264] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:18:19.274 [2024-11-19 10:45:58.232307] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid978275 ] 00:18:19.274 [2024-11-19 10:45:58.273383] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:19.274 [2024-11-19 10:45:58.278323] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:19.274 [2024-11-19 10:45:58.278342] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fda94c1a000 00:18:19.274 [2024-11-19 10:45:58.279322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.274 [2024-11-19 10:45:58.280332] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.274 [2024-11-19 10:45:58.281336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.274 [2024-11-19 10:45:58.282346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:19.274 [2024-11-19 10:45:58.283355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:19.274 [2024-11-19 10:45:58.284357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.274 [2024-11-19 10:45:58.285360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:19.274 [2024-11-19 10:45:58.286366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:18:19.274 [2024-11-19 10:45:58.287377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:19.274 [2024-11-19 10:45:58.287388] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fda94c0f000 00:18:19.274 [2024-11-19 10:45:58.288300] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:19.274 [2024-11-19 10:45:58.302446] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:19.274 [2024-11-19 10:45:58.302464] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:19.274 [2024-11-19 10:45:58.304517] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:19.274 [2024-11-19 10:45:58.304552] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:19.274 [2024-11-19 10:45:58.304610] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:19.274 [2024-11-19 10:45:58.304620] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:19.274 [2024-11-19 10:45:58.304623] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:19.274 [2024-11-19 10:45:58.305520] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:19.274 [2024-11-19 10:45:58.305528] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:19.274 [2024-11-19 10:45:58.305534] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:19.274 [2024-11-19 10:45:58.306529] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:19.274 [2024-11-19 10:45:58.306537] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:19.274 [2024-11-19 10:45:58.306542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:19.274 [2024-11-19 10:45:58.307533] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:19.274 [2024-11-19 10:45:58.307539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:19.274 [2024-11-19 10:45:58.308541] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:19.274 [2024-11-19 10:45:58.308548] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:18:19.274 [2024-11-19 10:45:58.308552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:19.274 [2024-11-19 10:45:58.308556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:19.274 [2024-11-19 10:45:58.308662] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:19.274 [2024-11-19 10:45:58.308666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:19.274 [2024-11-19 10:45:58.308669] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:19.274 [2024-11-19 10:45:58.309546] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:19.274 [2024-11-19 10:45:58.310551] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:19.274 [2024-11-19 10:45:58.311559] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:19.274 [2024-11-19 10:45:58.312560] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:19.274 [2024-11-19 10:45:58.312593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:19.274 [2024-11-19 10:45:58.313568] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:19.274 [2024-11-19 10:45:58.313575] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:19.274 [2024-11-19 10:45:58.313578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:19.274 [2024-11-19 10:45:58.313593] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:19.274 [2024-11-19 10:45:58.313599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:19.274 [2024-11-19 10:45:58.313608] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:19.274 [2024-11-19 10:45:58.313612] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:19.274 [2024-11-19 10:45:58.313614] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.274 [2024-11-19 10:45:58.313623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:19.274 [2024-11-19 10:45:58.324166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:19.274 
[2024-11-19 10:45:58.324175] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:19.274 [2024-11-19 10:45:58.324179] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:19.274 [2024-11-19 10:45:58.324182] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:19.274 [2024-11-19 10:45:58.324185] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:19.275 [2024-11-19 10:45:58.324191] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:19.275 [2024-11-19 10:45:58.324194] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:19.275 [2024-11-19 10:45:58.324197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.324204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.324211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.332165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.332175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.275 [2024-11-19 10:45:58.332183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.275 [2024-11-19 10:45:58.332190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.275 [2024-11-19 10:45:58.332196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.275 [2024-11-19 10:45:58.332199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.332204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.332211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.340164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.340171] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:19.275 [2024-11-19 10:45:58.340175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:18:19.275 [2024-11-19 10:45:58.340180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.340185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.340191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.348165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.348211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.348217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.348222] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:19.275 [2024-11-19 10:45:58.348226] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:19.275 [2024-11-19 10:45:58.348228] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.275 [2024-11-19 10:45:58.348233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.356164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.356172] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:19.275 [2024-11-19 10:45:58.356182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.356188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.356193] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:19.275 [2024-11-19 10:45:58.356197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:19.275 [2024-11-19 10:45:58.356201] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.275 [2024-11-19 10:45:58.356205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.364164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.364175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.364181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.364187] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:19.275 [2024-11-19 10:45:58.364190] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:19.275 [2024-11-19 10:45:58.364192] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.275 [2024-11-19 10:45:58.364197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.372163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.372170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.372175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.372181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.372185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.372189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.372192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.372196] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:19.275 [2024-11-19 10:45:58.372199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:19.275 [2024-11-19 10:45:58.372203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:19.275 [2024-11-19 10:45:58.372215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.380165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.380176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.388165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.388175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.396165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:18:19.275 [2024-11-19 10:45:58.396177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.404164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.404176] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:19.275 [2024-11-19 10:45:58.404179] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:19.275 [2024-11-19 10:45:58.404182] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:19.275 [2024-11-19 10:45:58.404184] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:19.275 [2024-11-19 10:45:58.404187] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:19.275 [2024-11-19 10:45:58.404191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:19.275 [2024-11-19 10:45:58.404197] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:19.275 [2024-11-19 10:45:58.404199] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:19.275 [2024-11-19 10:45:58.404202] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.275 [2024-11-19 10:45:58.404206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.404211] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:19.275 [2024-11-19 10:45:58.404214] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:19.275 [2024-11-19 10:45:58.404216] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.275 [2024-11-19 10:45:58.404221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.404226] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:19.275 [2024-11-19 10:45:58.404229] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:19.275 [2024-11-19 10:45:58.404231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.275 [2024-11-19 10:45:58.404236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:19.275 [2024-11-19 10:45:58.412163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.412173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:19.275 [2024-11-19 10:45:58.412181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:19.275 
[2024-11-19 10:45:58.412185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:19.275 ===================================================== 00:18:19.276 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:19.276 ===================================================== 00:18:19.276 Controller Capabilities/Features 00:18:19.276 ================================ 00:18:19.276 Vendor ID: 4e58 00:18:19.276 Subsystem Vendor ID: 4e58 00:18:19.276 Serial Number: SPDK2 00:18:19.276 Model Number: SPDK bdev Controller 00:18:19.276 Firmware Version: 25.01 00:18:19.276 Recommended Arb Burst: 6 00:18:19.276 IEEE OUI Identifier: 8d 6b 50 00:18:19.276 Multi-path I/O 00:18:19.276 May have multiple subsystem ports: Yes 00:18:19.276 May have multiple controllers: Yes 00:18:19.276 Associated with SR-IOV VF: No 00:18:19.276 Max Data Transfer Size: 131072 00:18:19.276 Max Number of Namespaces: 32 00:18:19.276 Max Number of I/O Queues: 127 00:18:19.276 NVMe Specification Version (VS): 1.3 00:18:19.276 NVMe Specification Version (Identify): 1.3 00:18:19.276 Maximum Queue Entries: 256 00:18:19.276 Contiguous Queues Required: Yes 00:18:19.276 Arbitration Mechanisms Supported 00:18:19.276 Weighted Round Robin: Not Supported 00:18:19.276 Vendor Specific: Not Supported 00:18:19.276 Reset Timeout: 15000 ms 00:18:19.276 Doorbell Stride: 4 bytes 00:18:19.276 NVM Subsystem Reset: Not Supported 00:18:19.276 Command Sets Supported 00:18:19.276 NVM Command Set: Supported 00:18:19.276 Boot Partition: Not Supported 00:18:19.276 Memory Page Size Minimum: 4096 bytes 00:18:19.276 Memory Page Size Maximum: 4096 bytes 00:18:19.276 Persistent Memory Region: Not Supported 00:18:19.276 Optional Asynchronous Events Supported 00:18:19.276 Namespace Attribute Notices: Supported 00:18:19.276 Firmware Activation Notices: Not Supported 00:18:19.276 ANA Change Notices: Not Supported 00:18:19.276 PLE Aggregate Log Change Notices: Not Supported 00:18:19.276 LBA Status Info Alert Notices: Not Supported 00:18:19.276 EGE Aggregate Log Change Notices: Not Supported 00:18:19.276 Normal NVM Subsystem Shutdown event: Not Supported 00:18:19.276 Zone Descriptor Change Notices: Not Supported 00:18:19.276 Discovery Log Change Notices: Not Supported 00:18:19.276 Controller Attributes 00:18:19.276 128-bit Host Identifier: Supported 00:18:19.276 Non-Operational Permissive Mode: Not Supported 00:18:19.276 NVM Sets: Not Supported 00:18:19.276 Read Recovery Levels: Not Supported 00:18:19.276 Endurance Groups: Not Supported 00:18:19.276 Predictable Latency Mode: Not Supported 00:18:19.276 Traffic Based Keep ALive: Not Supported 00:18:19.276 Namespace Granularity: Not Supported 00:18:19.276 SQ Associations: Not Supported 00:18:19.276 UUID List: Not Supported 00:18:19.276 Multi-Domain Subsystem: Not Supported 00:18:19.276 Fixed Capacity Management: Not Supported 00:18:19.276 Variable Capacity Management: Not Supported 00:18:19.276 Delete Endurance Group: Not Supported 00:18:19.276 Delete NVM Set: Not Supported 00:18:19.276 Extended LBA Formats Supported: Not Supported 00:18:19.276 Flexible Data Placement Supported: Not Supported 00:18:19.276 00:18:19.276 Controller Memory Buffer Support 00:18:19.276 ================================ 00:18:19.276 Supported: No 00:18:19.276 00:18:19.276 Persistent Memory Region Support 00:18:19.276 ================================ 00:18:19.276 Supported: No 00:18:19.276 00:18:19.276 Admin Command Set Attributes 
00:18:19.276 ============================ 00:18:19.276 Security Send/Receive: Not Supported 00:18:19.276 Format NVM: Not Supported 00:18:19.276 Firmware Activate/Download: Not Supported 00:18:19.276 Namespace Management: Not Supported 00:18:19.276 Device Self-Test: Not Supported 00:18:19.276 Directives: Not Supported 00:18:19.276 NVMe-MI: Not Supported 00:18:19.276 Virtualization Management: Not Supported 00:18:19.276 Doorbell Buffer Config: Not Supported 00:18:19.276 Get LBA Status Capability: Not Supported 00:18:19.276 Command & Feature Lockdown Capability: Not Supported 00:18:19.276 Abort Command Limit: 4 00:18:19.276 Async Event Request Limit: 4 00:18:19.276 Number of Firmware Slots: N/A 00:18:19.276 Firmware Slot 1 Read-Only: N/A 00:18:19.276 Firmware Activation Without Reset: N/A 00:18:19.276 Multiple Update Detection Support: N/A 00:18:19.276 Firmware Update Granularity: No Information Provided 00:18:19.276 Per-Namespace SMART Log: No 00:18:19.276 Asymmetric Namespace Access Log Page: Not Supported 00:18:19.276 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:19.276 Command Effects Log Page: Supported 00:18:19.276 Get Log Page Extended Data: Supported 00:18:19.276 Telemetry Log Pages: Not Supported 00:18:19.276 Persistent Event Log Pages: Not Supported 00:18:19.276 Supported Log Pages Log Page: May Support 00:18:19.276 Commands Supported & Effects Log Page: Not Supported 00:18:19.276 Feature Identifiers & Effects Log Page:May Support 00:18:19.276 NVMe-MI Commands & Effects Log Page: May Support 00:18:19.276 Data Area 4 for Telemetry Log: Not Supported 00:18:19.276 Error Log Page Entries Supported: 128 00:18:19.276 Keep Alive: Supported 00:18:19.276 Keep Alive Granularity: 10000 ms 00:18:19.276 00:18:19.276 NVM Command Set Attributes 00:18:19.276 ========================== 00:18:19.276 Submission Queue Entry Size 00:18:19.276 Max: 64 00:18:19.276 Min: 64 00:18:19.276 Completion Queue Entry Size 00:18:19.276 Max: 16 00:18:19.276 Min: 16 00:18:19.276 Number of Namespaces: 32 00:18:19.276 Compare Command: Supported 00:18:19.276 Write Uncorrectable Command: Not Supported 00:18:19.276 Dataset Management Command: Supported 00:18:19.276 Write Zeroes Command: Supported 00:18:19.276 Set Features Save Field: Not Supported 00:18:19.276 Reservations: Not Supported 00:18:19.276 Timestamp: Not Supported 00:18:19.276 Copy: Supported 00:18:19.276 Volatile Write Cache: Present 00:18:19.276 Atomic Write Unit (Normal): 1 00:18:19.276 Atomic Write Unit (PFail): 1 00:18:19.276 Atomic Compare & Write Unit: 1 00:18:19.276 Fused Compare & Write: Supported 00:18:19.276 Scatter-Gather List 00:18:19.276 SGL Command Set: Supported (Dword aligned) 00:18:19.276 SGL Keyed: Not Supported 00:18:19.276 SGL Bit Bucket Descriptor: Not Supported 00:18:19.276 SGL Metadata Pointer: Not Supported 00:18:19.276 Oversized SGL: Not Supported 00:18:19.276 SGL Metadata Address: Not Supported 00:18:19.276 SGL Offset: Not Supported 00:18:19.276 Transport SGL Data Block: Not Supported 00:18:19.276 Replay Protected Memory Block: Not Supported 00:18:19.276 00:18:19.276 Firmware Slot Information 00:18:19.276 ========================= 00:18:19.276 Active slot: 1 00:18:19.276 Slot 1 Firmware Revision: 25.01 00:18:19.276 00:18:19.276 00:18:19.276 Commands Supported and Effects 00:18:19.276 ============================== 00:18:19.276 Admin Commands 00:18:19.276 -------------- 00:18:19.276 Get Log Page (02h): Supported 00:18:19.276 Identify (06h): Supported 00:18:19.276 Abort (08h): Supported 00:18:19.276 Set Features (09h): Supported 
00:18:19.276 Get Features (0Ah): Supported 00:18:19.276 Asynchronous Event Request (0Ch): Supported 00:18:19.276 Keep Alive (18h): Supported 00:18:19.276 I/O Commands 00:18:19.276 ------------ 00:18:19.276 Flush (00h): Supported LBA-Change 00:18:19.276 Write (01h): Supported LBA-Change 00:18:19.276 Read (02h): Supported 00:18:19.276 Compare (05h): Supported 00:18:19.276 Write Zeroes (08h): Supported LBA-Change 00:18:19.276 Dataset Management (09h): Supported LBA-Change 00:18:19.276 Copy (19h): Supported LBA-Change 00:18:19.276 00:18:19.276 Error Log 00:18:19.276 ========= 00:18:19.276 00:18:19.276 Arbitration 00:18:19.276 =========== 00:18:19.276 Arbitration Burst: 1 00:18:19.276 00:18:19.276 Power Management 00:18:19.276 ================ 00:18:19.276 Number of Power States: 1 00:18:19.276 Current Power State: Power State #0 00:18:19.276 Power State #0: 00:18:19.276 Max Power: 0.00 W 00:18:19.276 Non-Operational State: Operational 00:18:19.276 Entry Latency: Not Reported 00:18:19.276 Exit Latency: Not Reported 00:18:19.276 Relative Read Throughput: 0 00:18:19.276 Relative Read Latency: 0 00:18:19.276 Relative Write Throughput: 0 00:18:19.276 Relative Write Latency: 0 00:18:19.276 Idle Power: Not Reported 00:18:19.276 Active Power: Not Reported 00:18:19.276 Non-Operational Permissive Mode: Not Supported 00:18:19.276 00:18:19.276 Health Information 00:18:19.276 ================== 00:18:19.277 Critical Warnings: 00:18:19.277 Available Spare Space: OK 00:18:19.277 Temperature: OK 00:18:19.277 Device Reliability: OK 00:18:19.277 Read Only: No 00:18:19.277 Volatile Memory Backup: OK 00:18:19.277 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:19.277 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:19.277 Available Spare: 0% 00:18:19.277 Available Sp[2024-11-19 10:45:58.412257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:19.277 [2024-11-19 10:45:58.420166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:19.277 [2024-11-19 10:45:58.420188] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:19.277 [2024-11-19 10:45:58.420194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.277 [2024-11-19 10:45:58.420199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.277 [2024-11-19 10:45:58.420205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.277 [2024-11-19 10:45:58.420209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.277 [2024-11-19 10:45:58.420245] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:19.277 [2024-11-19 10:45:58.420253] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:19.277 [2024-11-19 10:45:58.421253] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:19.277 [2024-11-19 10:45:58.421288] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:19.277 [2024-11-19 10:45:58.421293] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:19.277 [2024-11-19 10:45:58.422254] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:19.277 [2024-11-19 10:45:58.422263] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:19.277 [2024-11-19 10:45:58.422303] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:19.277 [2024-11-19 10:45:58.423277] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:19.277 are Threshold: 0% 00:18:19.277 Life Percentage Used: 0% 00:18:19.277 Data Units Read: 0 00:18:19.277 Data Units Written: 0 00:18:19.277 Host Read Commands: 0 00:18:19.277 Host Write Commands: 0 00:18:19.277 Controller Busy Time: 0 minutes 00:18:19.277 Power Cycles: 0 00:18:19.277 Power On Hours: 0 hours 00:18:19.277 Unsafe Shutdowns: 0 00:18:19.277 Unrecoverable Media Errors: 0 00:18:19.277 Lifetime Error Log Entries: 0 00:18:19.277 Warning Temperature Time: 0 minutes 00:18:19.277 Critical Temperature Time: 0 minutes 00:18:19.277 00:18:19.277 Number of Queues 00:18:19.277 ================ 00:18:19.277 Number of I/O Submission Queues: 127 00:18:19.277 Number of I/O Completion Queues: 127 00:18:19.277 00:18:19.277 Active Namespaces 00:18:19.277 ================= 00:18:19.277 Namespace ID:1 00:18:19.277 Error Recovery Timeout: Unlimited 00:18:19.277 Command Set Identifier: NVM (00h) 00:18:19.277 Deallocate: Supported 00:18:19.277 Deallocated/Unwritten Error: Not Supported 00:18:19.277 Deallocated Read Value: Unknown 00:18:19.277 Deallocate in Write Zeroes: Not Supported 00:18:19.277 Deallocated Guard Field: 0xFFFF 00:18:19.277 Flush: Supported 00:18:19.277 Reservation: Supported 00:18:19.277 Namespace Sharing Capabilities: Multiple Controllers 00:18:19.277 Size (in LBAs): 131072 (0GiB) 00:18:19.277 Capacity (in LBAs): 131072 (0GiB) 00:18:19.277 Utilization (in LBAs): 131072 (0GiB) 00:18:19.277 NGUID: C4FD0574D0CE46B9BB64970D50239BB8 00:18:19.277 UUID: c4fd0574-d0ce-46b9-bb64-970d50239bb8 00:18:19.277 Thin Provisioning: Not Supported 00:18:19.277 Per-NS Atomic Units: Yes 00:18:19.277 Atomic Boundary Size (Normal): 0 00:18:19.277 Atomic Boundary Size (PFail): 0 00:18:19.277 Atomic Boundary Offset: 0 00:18:19.277 Maximum Single Source Range Length: 65535 00:18:19.277 Maximum Copy Length: 65535 00:18:19.277 Maximum Source Range Count: 1 00:18:19.277 NGUID/EUI64 Never Reused: No 00:18:19.277 Namespace Write Protected: No 00:18:19.277 Number of LBA Formats: 1 00:18:19.277 Current LBA Format: LBA Format #00 00:18:19.277 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:19.277 00:18:19.277 10:45:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:19.538 [2024-11-19 10:45:58.610213] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:24.821 Initializing NVMe Controllers 00:18:24.821 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:24.821 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:24.821 Initialization complete. Launching workers. 00:18:24.821 ======================================================== 00:18:24.821 Latency(us) 00:18:24.821 Device Information : IOPS MiB/s Average min max 00:18:24.821 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40023.80 156.34 3198.29 843.12 7763.61 00:18:24.821 ======================================================== 00:18:24.821 Total : 40023.80 156.34 3198.29 843.12 7763.61 00:18:24.821 00:18:24.821 [2024-11-19 10:46:03.714357] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:24.821 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:24.821 [2024-11-19 10:46:03.904956] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.365 Initializing NVMe Controllers 00:18:30.365 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:30.365 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:30.365 Initialization complete. Launching workers. 00:18:30.365 ======================================================== 00:18:30.365 Latency(us) 00:18:30.366 Device Information : IOPS MiB/s Average min max 00:18:30.366 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40075.80 156.55 3196.61 842.23 7773.71 00:18:30.366 ======================================================== 00:18:30.366 Total : 40075.80 156.55 3196.61 842.23 7773.71 00:18:30.366 00:18:30.366 [2024-11-19 10:46:08.925058] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.366 10:46:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:30.366 [2024-11-19 10:46:09.139545] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:35.649 [2024-11-19 10:46:14.272240] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:35.649 Initializing NVMe Controllers 00:18:35.649 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:35.649 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:35.649 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:35.649 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:35.649 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:35.649 Initialization complete. Launching workers. 
00:18:35.649 Starting thread on core 2 00:18:35.649 Starting thread on core 3 00:18:35.649 Starting thread on core 1 00:18:35.649 10:46:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:35.649 [2024-11-19 10:46:14.511317] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:38.948 [2024-11-19 10:46:18.140289] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:39.209 Initializing NVMe Controllers 00:18:39.209 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:39.209 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:39.209 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:39.209 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:39.209 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:39.209 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:39.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:39.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:39.209 Initialization complete. Launching workers. 00:18:39.209 Starting thread on core 1 with urgent priority queue 00:18:39.209 Starting thread on core 2 with urgent priority queue 00:18:39.209 Starting thread on core 3 with urgent priority queue 00:18:39.209 Starting thread on core 0 with urgent priority queue 00:18:39.209 SPDK bdev Controller (SPDK2 ) core 0: 4613.67 IO/s 21.67 secs/100000 ios 00:18:39.209 SPDK bdev Controller (SPDK2 ) core 1: 3145.00 IO/s 31.80 secs/100000 ios 00:18:39.209 SPDK bdev Controller (SPDK2 ) core 2: 4729.33 IO/s 21.14 secs/100000 ios 00:18:39.209 SPDK bdev Controller (SPDK2 ) core 3: 5518.00 IO/s 18.12 secs/100000 ios 00:18:39.209 ======================================================== 00:18:39.209 00:18:39.209 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:39.209 [2024-11-19 10:46:18.380276] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:39.209 Initializing NVMe Controllers 00:18:39.209 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:39.209 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:39.209 Namespace ID: 1 size: 0GB 00:18:39.209 Initialization complete. 00:18:39.209 INFO: using host memory buffer for IO 00:18:39.209 Hello world! 
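For reference, the spdk_nvme_perf and example runs captured above all reduce to one invocation pattern. The following is a minimal bash sketch, not part of the captured log: the binary path, transport ID string, and flags are copied verbatim from the runs traced above, and it assumes an nvmf_tgt is already serving the vfio-user controller (adjust the workspace path and socket directory for other hosts); the hello_world teardown notice from the log continues after this sketch.

    #!/usr/bin/env bash
    # Hedged sketch of the perf run pattern traced above; assumes a live
    # target at the vfio-user socket directory shown in this log.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 4096-byte reads for 5 s at queue depth 128 on core mask 0x2,
    # with the same -s 256 -g memory options the harness passes.
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g \
        -q 128 -o 4096 -w read -t 5 -c 0x2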
00:18:39.209 [2024-11-19 10:46:18.392348] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:39.470 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:39.470 [2024-11-19 10:46:18.634909] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:40.851 Initializing NVMe Controllers 00:18:40.851 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:40.851 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:40.851 Initialization complete. Launching workers. 00:18:40.851 submit (in ns) avg, min, max = 6122.6, 2839.2, 3999601.7 00:18:40.851 complete (in ns) avg, min, max = 17882.5, 1625.0, 3998693.3 00:18:40.851 00:18:40.851 Submit histogram 00:18:40.851 ================ 00:18:40.851 Range in us Cumulative Count 00:18:40.851 2.827 - 2.840: 0.0049% ( 1) 00:18:40.851 2.840 - 2.853: 0.6575% ( 133) 00:18:40.851 2.853 - 2.867: 2.1198% ( 298) 00:18:40.851 2.867 - 2.880: 5.9571% ( 782) 00:18:40.851 2.880 - 2.893: 10.4421% ( 914) 00:18:40.851 2.893 - 2.907: 16.7869% ( 1293) 00:18:40.851 2.907 - 2.920: 22.8274% ( 1231) 00:18:40.852 2.920 - 2.933: 29.4568% ( 1351) 00:18:40.852 2.933 - 2.947: 34.6582% ( 1060) 00:18:40.852 2.947 - 2.960: 38.7016% ( 824) 00:18:40.852 2.960 - 2.973: 43.0983% ( 896) 00:18:40.852 2.973 - 2.987: 48.6727% ( 1136) 00:18:40.852 2.987 - 3.000: 56.4699% ( 1589) 00:18:40.852 3.000 - 3.013: 66.4753% ( 2039) 00:18:40.852 3.013 - 3.027: 75.3815% ( 1815) 00:18:40.852 3.027 - 3.040: 82.8451% ( 1521) 00:18:40.852 3.040 - 3.053: 89.0377% ( 1262) 00:18:40.852 3.053 - 3.067: 93.9104% ( 993) 00:18:40.852 3.067 - 3.080: 96.9380% ( 617) 00:18:40.852 3.080 - 3.093: 98.5132% ( 321) 00:18:40.852 3.093 - 3.107: 99.1462% ( 129) 00:18:40.852 3.107 - 3.120: 99.3915% ( 50) 00:18:40.852 3.120 - 3.133: 99.5093% ( 24) 00:18:40.852 3.133 - 3.147: 99.5584% ( 10) 00:18:40.852 3.147 - 3.160: 99.5829% ( 5) 00:18:40.852 3.213 - 3.227: 99.5878% ( 1) 00:18:40.852 3.267 - 3.280: 99.5927% ( 1) 00:18:40.852 3.320 - 3.333: 99.5976% ( 1) 00:18:40.852 3.573 - 3.600: 99.6025% ( 1) 00:18:40.852 3.600 - 3.627: 99.6074% ( 1) 00:18:40.852 3.653 - 3.680: 99.6123% ( 1) 00:18:40.852 3.787 - 3.813: 99.6222% ( 2) 00:18:40.852 3.867 - 3.893: 99.6271% ( 1) 00:18:40.852 4.027 - 4.053: 99.6369% ( 2) 00:18:40.852 4.133 - 4.160: 99.6418% ( 1) 00:18:40.852 4.187 - 4.213: 99.6467% ( 1) 00:18:40.852 4.533 - 4.560: 99.6565% ( 2) 00:18:40.852 4.587 - 4.613: 99.6614% ( 1) 00:18:40.852 4.613 - 4.640: 99.6663% ( 1) 00:18:40.852 4.667 - 4.693: 99.6712% ( 1) 00:18:40.852 4.720 - 4.747: 99.6761% ( 1) 00:18:40.852 4.800 - 4.827: 99.6810% ( 1) 00:18:40.852 4.853 - 4.880: 99.6860% ( 1) 00:18:40.852 4.907 - 4.933: 99.6909% ( 1) 00:18:40.852 4.933 - 4.960: 99.6958% ( 1) 00:18:40.852 4.987 - 5.013: 99.7056% ( 2) 00:18:40.852 5.013 - 5.040: 99.7105% ( 1) 00:18:40.852 5.040 - 5.067: 99.7203% ( 2) 00:18:40.852 5.067 - 5.093: 99.7252% ( 1) 00:18:40.852 5.093 - 5.120: 99.7301% ( 1) 00:18:40.852 5.147 - 5.173: 99.7350% ( 1) 00:18:40.852 5.173 - 5.200: 99.7399% ( 1) 00:18:40.852 5.280 - 5.307: 99.7448% ( 1) 00:18:40.852 5.520 - 5.547: 99.7497% ( 1) 00:18:40.852 5.627 - 5.653: 99.7546% ( 1) 00:18:40.852 5.707 - 5.733: 99.7645% ( 2) 00:18:40.852 5.867 - 5.893: 99.7694% ( 1) 00:18:40.852 5.893 - 5.920: 
99.7792% ( 2) 00:18:40.852 5.973 - 6.000: 99.7841% ( 1) 00:18:40.852 6.000 - 6.027: 99.7890% ( 1) 00:18:40.852 6.107 - 6.133: 99.7939% ( 1) 00:18:40.852 6.240 - 6.267: 99.7988% ( 1) 00:18:40.852 6.267 - 6.293: 99.8037% ( 1) 00:18:40.852 6.320 - 6.347: 99.8135% ( 2) 00:18:40.852 6.347 - 6.373: 99.8184% ( 1) 00:18:40.852 6.373 - 6.400: 99.8233% ( 1) 00:18:40.852 6.480 - 6.507: 99.8283% ( 1) 00:18:40.852 6.533 - 6.560: 99.8430% ( 3) 00:18:40.852 6.560 - 6.587: 99.8479% ( 1) 00:18:40.852 6.667 - 6.693: 99.8528% ( 1) 00:18:40.852 6.773 - 6.800: 99.8577% ( 1) 00:18:40.852 6.800 - 6.827: 99.8626% ( 1) 00:18:40.852 6.827 - 6.880: 99.8675% ( 1) 00:18:40.852 6.880 - 6.933: 99.8724% ( 1) 00:18:40.852 6.933 - 6.987: 99.8773% ( 1) 00:18:40.852 7.093 - 7.147: 99.8822% ( 1) 00:18:40.852 7.253 - 7.307: 99.8871% ( 1) 00:18:40.852 7.307 - 7.360: 99.8970% ( 2) 00:18:40.852 7.413 - 7.467: 99.9019% ( 1) 00:18:40.852 [2024-11-19 10:46:19.732706] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:40.852 8.053 - 8.107: 99.9068% ( 1) 00:18:40.852 8.213 - 8.267: 99.9117% ( 1) 00:18:40.852 9.120 - 9.173: 99.9215% ( 2) 00:18:40.852 3986.773 - 4014.080: 100.0000% ( 16) 00:18:40.852 00:18:40.852 Complete histogram 00:18:40.852 ================== 00:18:40.852 Range in us Cumulative Count 00:18:40.852 1.620 - 1.627: 0.0049% ( 1) 00:18:40.852 1.627 - 1.633: 0.0147% ( 2) 00:18:40.852 1.633 - 1.640: 0.5447% ( 108) 00:18:40.852 1.640 - 1.647: 0.8391% ( 60) 00:18:40.852 1.647 - 1.653: 0.9078% ( 14) 00:18:40.852 1.653 - 1.660: 1.1286% ( 45) 00:18:40.852 1.660 - 1.667: 1.1924% ( 13) 00:18:40.852 1.667 - 1.673: 1.2317% ( 8) 00:18:40.852 1.673 - 1.680: 1.2464% ( 3) 00:18:40.852 1.680 - 1.687: 1.2758% ( 6) 00:18:40.852 1.687 - 1.693: 35.7672% ( 7029) 00:18:40.852 1.693 - 1.700: 48.4960% ( 2594) 00:18:40.852 1.700 - 1.707: 54.5905% ( 1242) 00:18:40.852 1.707 - 1.720: 75.3374% ( 4228) 00:18:40.852 1.720 - 1.733: 82.0943% ( 1377) 00:18:40.852 1.733 - 1.747: 83.4536% ( 277) 00:18:40.852 1.747 - 1.760: 87.2810% ( 780) 00:18:40.852 1.760 - 1.773: 92.7720% ( 1119) 00:18:40.852 1.773 - 1.787: 96.6191% ( 784) 00:18:40.852 1.787 - 1.800: 98.5770% ( 399) 00:18:40.852 1.800 - 1.813: 99.2590% ( 139) 00:18:40.852 1.813 - 1.827: 99.3768% ( 24) 00:18:40.852 1.827 - 1.840: 99.4013% ( 5) 00:18:40.852 2.200 - 2.213: 99.4063% ( 1) 00:18:40.852 3.280 - 3.293: 99.4112% ( 1) 00:18:40.852 3.413 - 3.440: 99.4161% ( 1) 00:18:40.852 3.653 - 3.680: 99.4210% ( 1) 00:18:40.852 3.733 - 3.760: 99.4259% ( 1) 00:18:40.852 3.840 - 3.867: 99.4308% ( 1) 00:18:40.852 3.973 - 4.000: 99.4357% ( 1) 00:18:40.852 4.080 - 4.107: 99.4406% ( 1) 00:18:40.852 4.507 - 4.533: 99.4455% ( 1) 00:18:40.852 4.613 - 4.640: 99.4504% ( 1) 00:18:40.852 4.693 - 4.720: 99.4553% ( 1) 00:18:40.852 4.720 - 4.747: 99.4602% ( 1) 00:18:40.852 4.773 - 4.800: 99.4651% ( 1) 00:18:40.852 4.827 - 4.853: 99.4700% ( 1) 00:18:40.852 4.880 - 4.907: 99.4749% ( 1) 00:18:40.852 4.907 - 4.933: 99.4799% ( 1) 00:18:40.852 5.040 - 5.067: 99.4848% ( 1) 00:18:40.852 5.067 - 5.093: 99.4897% ( 1) 00:18:40.852 5.093 - 5.120: 99.4946% ( 1) 00:18:40.852 5.120 - 5.147: 99.5044% ( 2) 00:18:40.852 5.147 - 5.173: 99.5142% ( 2) 00:18:40.852 5.200 - 5.227: 99.5191% ( 1) 00:18:40.852 5.253 - 5.280: 99.5240% ( 1) 00:18:40.852 5.307 - 5.333: 99.5289% ( 1) 00:18:40.852 5.333 - 5.360: 99.5338% ( 1) 00:18:40.852 5.413 - 5.440: 99.5387% ( 1) 00:18:40.852 5.467 - 5.493: 99.5436% ( 1) 00:18:40.852 5.547 - 5.573: 99.5486% ( 1) 00:18:40.852 5.627 - 5.653: 99.5535% ( 1) 
00:18:40.852 5.653 - 5.680: 99.5584% ( 1) 00:18:40.852 5.787 - 5.813: 99.5633% ( 1) 00:18:40.852 5.893 - 5.920: 99.5682% ( 1) 00:18:40.852 6.293 - 6.320: 99.5731% ( 1) 00:18:40.852 6.427 - 6.453: 99.5780% ( 1) 00:18:40.852 6.720 - 6.747: 99.5829% ( 1) 00:18:40.852 11.467 - 11.520: 99.5878% ( 1) 00:18:40.853 11.787 - 11.840: 99.5927% ( 1) 00:18:40.853 2061.653 - 2075.307: 99.5976% ( 1) 00:18:40.853 3795.627 - 3822.933: 99.6025% ( 1) 00:18:40.853 3986.773 - 4014.080: 100.0000% ( 81) 00:18:40.853 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:40.853 [ 00:18:40.853 { 00:18:40.853 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:40.853 "subtype": "Discovery", 00:18:40.853 "listen_addresses": [], 00:18:40.853 "allow_any_host": true, 00:18:40.853 "hosts": [] 00:18:40.853 }, 00:18:40.853 { 00:18:40.853 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:40.853 "subtype": "NVMe", 00:18:40.853 "listen_addresses": [ 00:18:40.853 { 00:18:40.853 "trtype": "VFIOUSER", 00:18:40.853 "adrfam": "IPv4", 00:18:40.853 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:40.853 "trsvcid": "0" 00:18:40.853 } 00:18:40.853 ], 00:18:40.853 "allow_any_host": true, 00:18:40.853 "hosts": [], 00:18:40.853 "serial_number": "SPDK1", 00:18:40.853 "model_number": "SPDK bdev Controller", 00:18:40.853 "max_namespaces": 32, 00:18:40.853 "min_cntlid": 1, 00:18:40.853 "max_cntlid": 65519, 00:18:40.853 "namespaces": [ 00:18:40.853 { 00:18:40.853 "nsid": 1, 00:18:40.853 "bdev_name": "Malloc1", 00:18:40.853 "name": "Malloc1", 00:18:40.853 "nguid": "13729B8E5903489C95D974766A9D9B16", 00:18:40.853 "uuid": "13729b8e-5903-489c-95d9-74766a9d9b16" 00:18:40.853 }, 00:18:40.853 { 00:18:40.853 "nsid": 2, 00:18:40.853 "bdev_name": "Malloc3", 00:18:40.853 "name": "Malloc3", 00:18:40.853 "nguid": "902290E7D21747F59DCCB047CEA60806", 00:18:40.853 "uuid": "902290e7-d217-47f5-9dcc-b047cea60806" 00:18:40.853 } 00:18:40.853 ] 00:18:40.853 }, 00:18:40.853 { 00:18:40.853 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:40.853 "subtype": "NVMe", 00:18:40.853 "listen_addresses": [ 00:18:40.853 { 00:18:40.853 "trtype": "VFIOUSER", 00:18:40.853 "adrfam": "IPv4", 00:18:40.853 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:40.853 "trsvcid": "0" 00:18:40.853 } 00:18:40.853 ], 00:18:40.853 "allow_any_host": true, 00:18:40.853 "hosts": [], 00:18:40.853 "serial_number": "SPDK2", 00:18:40.853 "model_number": "SPDK bdev Controller", 00:18:40.853 "max_namespaces": 32, 00:18:40.853 "min_cntlid": 1, 00:18:40.853 "max_cntlid": 65519, 00:18:40.853 "namespaces": [ 00:18:40.853 { 00:18:40.853 "nsid": 1, 00:18:40.853 "bdev_name": "Malloc2", 00:18:40.853 "name": "Malloc2", 00:18:40.853 "nguid": "C4FD0574D0CE46B9BB64970D50239BB8", 00:18:40.853 "uuid": "c4fd0574-d0ce-46b9-bb64-970d50239bb8" 00:18:40.853 } 00:18:40.853 ] 00:18:40.853 } 
00:18:40.853 ] 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=982411 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:40.853 10:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:41.114 [2024-11-19 10:46:20.113676] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:41.114 Malloc4 00:18:41.114 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:41.114 [2024-11-19 10:46:20.301931] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:41.375 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:41.375 Asynchronous Event Request test 00:18:41.375 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:41.375 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:41.375 Registering asynchronous event callbacks... 00:18:41.375 Starting namespace attribute notice tests for all controllers... 00:18:41.375 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:41.375 aer_cb - Changed Namespace 00:18:41.375 Cleaning up... 
00:18:41.375 [ 00:18:41.375 { 00:18:41.375 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:41.375 "subtype": "Discovery", 00:18:41.375 "listen_addresses": [], 00:18:41.375 "allow_any_host": true, 00:18:41.375 "hosts": [] 00:18:41.375 }, 00:18:41.375 { 00:18:41.375 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:41.375 "subtype": "NVMe", 00:18:41.375 "listen_addresses": [ 00:18:41.375 { 00:18:41.375 "trtype": "VFIOUSER", 00:18:41.375 "adrfam": "IPv4", 00:18:41.375 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:41.375 "trsvcid": "0" 00:18:41.375 } 00:18:41.375 ], 00:18:41.375 "allow_any_host": true, 00:18:41.375 "hosts": [], 00:18:41.375 "serial_number": "SPDK1", 00:18:41.375 "model_number": "SPDK bdev Controller", 00:18:41.375 "max_namespaces": 32, 00:18:41.375 "min_cntlid": 1, 00:18:41.375 "max_cntlid": 65519, 00:18:41.375 "namespaces": [ 00:18:41.375 { 00:18:41.375 "nsid": 1, 00:18:41.375 "bdev_name": "Malloc1", 00:18:41.375 "name": "Malloc1", 00:18:41.375 "nguid": "13729B8E5903489C95D974766A9D9B16", 00:18:41.375 "uuid": "13729b8e-5903-489c-95d9-74766a9d9b16" 00:18:41.375 }, 00:18:41.375 { 00:18:41.375 "nsid": 2, 00:18:41.375 "bdev_name": "Malloc3", 00:18:41.375 "name": "Malloc3", 00:18:41.375 "nguid": "902290E7D21747F59DCCB047CEA60806", 00:18:41.375 "uuid": "902290e7-d217-47f5-9dcc-b047cea60806" 00:18:41.375 } 00:18:41.375 ] 00:18:41.375 }, 00:18:41.375 { 00:18:41.375 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:41.375 "subtype": "NVMe", 00:18:41.375 "listen_addresses": [ 00:18:41.375 { 00:18:41.375 "trtype": "VFIOUSER", 00:18:41.375 "adrfam": "IPv4", 00:18:41.375 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:41.375 "trsvcid": "0" 00:18:41.375 } 00:18:41.375 ], 00:18:41.375 "allow_any_host": true, 00:18:41.375 "hosts": [], 00:18:41.375 "serial_number": "SPDK2", 00:18:41.375 "model_number": "SPDK bdev Controller", 00:18:41.375 "max_namespaces": 32, 00:18:41.375 "min_cntlid": 1, 00:18:41.375 "max_cntlid": 65519, 00:18:41.375 "namespaces": [ 00:18:41.375 { 00:18:41.376 "nsid": 1, 00:18:41.376 "bdev_name": "Malloc2", 00:18:41.376 "name": "Malloc2", 00:18:41.376 "nguid": "C4FD0574D0CE46B9BB64970D50239BB8", 00:18:41.376 "uuid": "c4fd0574-d0ce-46b9-bb64-970d50239bb8" 00:18:41.376 }, 00:18:41.376 { 00:18:41.376 "nsid": 2, 00:18:41.376 "bdev_name": "Malloc4", 00:18:41.376 "name": "Malloc4", 00:18:41.376 "nguid": "6576EB73991C45A5923ADF65A6BAB9B4", 00:18:41.376 "uuid": "6576eb73-991c-45a5-923a-df65a6bab9b4" 00:18:41.376 } 00:18:41.376 ] 00:18:41.376 } 00:18:41.376 ] 00:18:41.376 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 982411 00:18:41.376 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:41.376 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 973316 00:18:41.376 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 973316 ']' 00:18:41.376 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 973316 00:18:41.376 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:41.376 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.376 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973316 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973316' 00:18:41.636 killing process with pid 973316 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 973316 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 973316 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=982698 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 982698' 00:18:41.636 Process pid: 982698 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 982698 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 982698 ']' 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.636 10:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:41.636 [2024-11-19 10:46:20.782935] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:41.636 [2024-11-19 10:46:20.783882] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:18:41.636 [2024-11-19 10:46:20.783924] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.897 [2024-11-19 10:46:20.870951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.897 [2024-11-19 10:46:20.904848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.897 [2024-11-19 10:46:20.904883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.897 [2024-11-19 10:46:20.904889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.897 [2024-11-19 10:46:20.904894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.897 [2024-11-19 10:46:20.904898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.897 [2024-11-19 10:46:20.906193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.897 [2024-11-19 10:46:20.906286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.897 [2024-11-19 10:46:20.906436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.897 [2024-11-19 10:46:20.906438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.897 [2024-11-19 10:46:20.958846] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:41.897 [2024-11-19 10:46:20.959643] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:41.897 [2024-11-19 10:46:20.960623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:41.897 [2024-11-19 10:46:20.960987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:41.897 [2024-11-19 10:46:20.961023] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
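The trace that follows brings the vfio-user target up over JSON-RPC. As a hedged consolidation, every path, NQN, and flag in the sketch below is copied from that trace; only the grouping into one script is editorial, and the log repeats the same steps for the second device (vfio-user2/cnode2).

    #!/usr/bin/env bash
    # Condensed sketch of the bring-up RPCs traced below (device 1 shown).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    DIR=/var/run/vfio-user/domain/vfio-user1/1
    "$RPC" nvmf_create_transport -t VFIOUSER -M -I     # interrupt-mode vfio-user transport
    mkdir -p "$DIR"                                    # per-device socket directory
    "$RPC" bdev_malloc_create 64 512 -b Malloc1        # backing bdev (arguments as traced: 64 MB, 512 B blocks)
    "$RPC" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    "$RPC" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a "$DIR" -s 0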
00:18:42.467 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.467 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:42.467 10:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:43.407 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:43.668 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:43.668 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:43.668 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:43.668 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:43.668 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:43.928 Malloc1 00:18:43.928 10:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:44.188 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:44.188 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:44.449 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:44.449 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:44.449 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:44.708 Malloc2 00:18:44.708 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:44.968 10:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:44.968 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 982698 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 982698 ']' 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 982698 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982698 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 982698' 00:18:45.228 killing process with pid 982698 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 982698 00:18:45.228 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 982698 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:45.488 00:18:45.488 real 0m51.521s 00:18:45.488 user 3m17.643s 00:18:45.488 sys 0m2.635s 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:45.488 ************************************ 00:18:45.488 END TEST nvmf_vfio_user 00:18:45.488 ************************************ 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:45.488 ************************************ 00:18:45.488 START TEST nvmf_vfio_user_nvme_compliance 00:18:45.488 ************************************ 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:45.488 * Looking for test storage... 
00:18:45.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:18:45.488 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.749 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.750 --rc genhtml_branch_coverage=1 00:18:45.750 --rc genhtml_function_coverage=1 00:18:45.750 --rc genhtml_legend=1 00:18:45.750 --rc geninfo_all_blocks=1 00:18:45.750 --rc geninfo_unexecuted_blocks=1 00:18:45.750 00:18:45.750 ' 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.750 --rc genhtml_branch_coverage=1 00:18:45.750 --rc genhtml_function_coverage=1 00:18:45.750 --rc genhtml_legend=1 00:18:45.750 --rc geninfo_all_blocks=1 00:18:45.750 --rc geninfo_unexecuted_blocks=1 00:18:45.750 00:18:45.750 ' 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.750 --rc genhtml_branch_coverage=1 00:18:45.750 --rc genhtml_function_coverage=1 00:18:45.750 --rc genhtml_legend=1 00:18:45.750 --rc geninfo_all_blocks=1 00:18:45.750 --rc geninfo_unexecuted_blocks=1 00:18:45.750 00:18:45.750 ' 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.750 --rc genhtml_branch_coverage=1 00:18:45.750 --rc genhtml_function_coverage=1 00:18:45.750 --rc genhtml_legend=1 00:18:45.750 --rc geninfo_all_blocks=1 00:18:45.750 --rc 
geninfo_unexecuted_blocks=1 00:18:45.750 00:18:45.750 ' 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same three toolchain dirs repeated; trimmed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=[... same expansion with /opt/go/1.21.1/bin prepended; trimmed ...] 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=[... same expansion with /opt/protoc/21.7/bin prepended; trimmed ...] 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo [... same PATH value; trimmed ...] 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=983506 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 983506' 00:18:45.750 Process pid: 983506 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 983506 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 983506 ']' 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.750 10:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:45.750 [2024-11-19 10:46:24.856972] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
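
[Editor's note] The "[: : integer expression expected" complaint above is bash objecting to nvmf/common.sh line 33 running '[' '' -eq 1 ']': test's -eq needs an integer on both sides, and the variable it expands is empty at this point in the run. The harness keeps going because the failed test simply takes the false branch (execution continues at common.sh@37). A minimal sketch of the defensive pattern that avoids the message; the flag name is hypothetical, standing in for whichever variable line 33 actually expands:

#!/usr/bin/env bash
flag=""                          # unset/empty, as in the trace above
# [ "$flag" -eq 1 ]              # reproduces: [: : integer expression expected
if [ "${flag:-0}" -eq 1 ]; then  # default the expansion so test always sees an integer
    echo "flag enabled"
fi
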
00:18:45.751 [2024-11-19 10:46:24.857025] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.751 [2024-11-19 10:46:24.940015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:46.011 [2024-11-19 10:46:24.971762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.011 [2024-11-19 10:46:24.971796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.011 [2024-11-19 10:46:24.971801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.011 [2024-11-19 10:46:24.971806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.011 [2024-11-19 10:46:24.971810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.011 [2024-11-19 10:46:24.972930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.011 [2024-11-19 10:46:24.973080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.011 [2024-11-19 10:46:24.973083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.581 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.581 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:46.581 10:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.522 malloc0 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:47.522 10:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.522 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.782 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.782 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:47.782 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.782 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.782 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.782 10:46:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:47.782 00:18:47.782 00:18:47.782 CUnit - A unit testing framework for C - Version 2.1-3 00:18:47.782 http://cunit.sourceforge.net/ 00:18:47.782 00:18:47.782 00:18:47.782 Suite: nvme_compliance 00:18:47.782 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 10:46:26.893539] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:47.782 [2024-11-19 10:46:26.894828] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:47.782 [2024-11-19 10:46:26.894839] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:47.782 [2024-11-19 10:46:26.894844] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:47.782 [2024-11-19 10:46:26.896559] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:47.782 passed 00:18:47.782 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 10:46:26.973041] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.042 [2024-11-19 10:46:26.978079] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.043 passed 00:18:48.043 Test: admin_identify_ns ...[2024-11-19 10:46:27.054539] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.043 [2024-11-19 10:46:27.118169] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:48.043 [2024-11-19 10:46:27.126168] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:48.043 [2024-11-19 10:46:27.147254] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:48.043 passed 00:18:48.043 Test: admin_get_features_mandatory_features ...[2024-11-19 10:46:27.218514] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.043 [2024-11-19 10:46:27.223546] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.302 passed 00:18:48.303 Test: admin_get_features_optional_features ...[2024-11-19 10:46:27.299005] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.303 [2024-11-19 10:46:27.302020] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.303 passed 00:18:48.303 Test: admin_set_features_number_of_queues ...[2024-11-19 10:46:27.376513] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.303 [2024-11-19 10:46:27.481249] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.562 passed 00:18:48.562 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 10:46:27.556273] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.562 [2024-11-19 10:46:27.559295] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.562 passed 00:18:48.562 Test: admin_get_log_page_with_lpo ...[2024-11-19 10:46:27.637523] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.562 [2024-11-19 10:46:27.706169] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:48.562 [2024-11-19 10:46:27.719206] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.562 passed 00:18:48.822 Test: fabric_property_get ...[2024-11-19 10:46:27.792407] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.822 [2024-11-19 10:46:27.793608] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:48.822 [2024-11-19 10:46:27.795426] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.822 passed 00:18:48.822 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 10:46:27.871869] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.822 [2024-11-19 10:46:27.873072] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:48.822 [2024-11-19 10:46:27.874889] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.822 passed 00:18:48.822 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 10:46:27.951508] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.082 [2024-11-19 10:46:28.034167] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:49.082 [2024-11-19 10:46:28.050162] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:49.082 [2024-11-19 10:46:28.055239] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.082 passed 00:18:49.082 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 10:46:28.130289] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.082 [2024-11-19 10:46:28.131499] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:49.082 [2024-11-19 10:46:28.133313] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.082 passed 00:18:49.082 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 10:46:28.208506] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.343 [2024-11-19 10:46:28.288164] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:49.343 [2024-11-19 10:46:28.312162] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:49.343 [2024-11-19 10:46:28.317232] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.343 passed 00:18:49.343 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 10:46:28.389414] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.343 [2024-11-19 10:46:28.390622] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:49.343 [2024-11-19 10:46:28.390639] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:49.343 [2024-11-19 10:46:28.392436] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.343 passed 00:18:49.343 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 10:46:28.469162] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.604 [2024-11-19 10:46:28.563162] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:49.604 [2024-11-19 10:46:28.571163] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:49.604 [2024-11-19 10:46:28.579162] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:49.604 [2024-11-19 10:46:28.587165] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:49.604 [2024-11-19 10:46:28.616224] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.604 passed 00:18:49.604 Test: admin_create_io_sq_verify_pc ...[2024-11-19 10:46:28.690410] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.604 [2024-11-19 10:46:28.707169] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:49.604 [2024-11-19 10:46:28.724583] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.604 passed 00:18:49.863 Test: admin_create_io_qp_max_qps ...[2024-11-19 10:46:28.800049] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:50.803 [2024-11-19 10:46:29.904166] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:51.372 [2024-11-19 10:46:30.297382] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:51.372 passed 00:18:51.372 Test: admin_create_io_sq_shared_cq ...[2024-11-19 10:46:30.371208] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:51.372 [2024-11-19 10:46:30.503166] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:51.372 [2024-11-19 10:46:30.540208] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:51.372 passed 00:18:51.372 00:18:51.372 Run Summary: Type Total Ran Passed Failed Inactive 00:18:51.372 suites 1 1 n/a 0 0 00:18:51.372 tests 18 18 18 0 0 00:18:51.372 asserts 
360 360 360 0 n/a 00:18:51.372 00:18:51.372 Elapsed time = 1.499 seconds 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 983506 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 983506 ']' 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 983506 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 983506 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 983506' 00:18:51.632 killing process with pid 983506 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 983506 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 983506 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:51.632 00:18:51.632 real 0m6.192s 00:18:51.632 user 0m17.591s 00:18:51.632 sys 0m0.531s 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.632 ************************************ 00:18:51.632 END TEST nvmf_vfio_user_nvme_compliance 00:18:51.632 ************************************ 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.632 10:46:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:51.894 ************************************ 00:18:51.894 START TEST nvmf_vfio_user_fuzz 00:18:51.894 ************************************ 00:18:51.894 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:51.894 * Looking for test storage... 
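
[Editor's note] The killprocess sequence above is the harness's standard teardown: confirm the pid is still alive, inspect what it actually is before signalling, then kill and reap. A minimal sketch of the same shape, with the pid hard-coded to the one from this run purely for illustration:

pid=983506                                   # pid the target printed at startup
if kill -0 "$pid" 2>/dev/null; then          # signal 0: existence/permission check only
    name=$(ps --no-headers -o comm= "$pid")  # mirror of the process-name check in the trace
    [ "$name" = sudo ] || kill "$pid"        # refuse to blindly signal a sudo wrapper
    wait "$pid" 2>/dev/null || true          # reap if it is a child of this shell
fi
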
00:18:51.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.894 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:51.894 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:51.894 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:51.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.894 --rc genhtml_branch_coverage=1 00:18:51.894 --rc genhtml_function_coverage=1 00:18:51.894 --rc genhtml_legend=1 00:18:51.894 --rc geninfo_all_blocks=1 00:18:51.894 --rc geninfo_unexecuted_blocks=1 00:18:51.894 00:18:51.894 ' 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:51.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.894 --rc genhtml_branch_coverage=1 00:18:51.894 --rc genhtml_function_coverage=1 00:18:51.894 --rc genhtml_legend=1 00:18:51.894 --rc geninfo_all_blocks=1 00:18:51.894 --rc geninfo_unexecuted_blocks=1 00:18:51.894 00:18:51.894 ' 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:51.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.894 --rc genhtml_branch_coverage=1 00:18:51.894 --rc genhtml_function_coverage=1 00:18:51.894 --rc genhtml_legend=1 00:18:51.894 --rc geninfo_all_blocks=1 00:18:51.894 --rc geninfo_unexecuted_blocks=1 00:18:51.894 00:18:51.894 ' 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:51.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.894 --rc genhtml_branch_coverage=1 00:18:51.894 --rc genhtml_function_coverage=1 00:18:51.894 --rc genhtml_legend=1 00:18:51.894 --rc geninfo_all_blocks=1 00:18:51.894 --rc geninfo_unexecuted_blocks=1 00:18:51.894 00:18:51.894 ' 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.894 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same three toolchain dirs repeated; trimmed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=[... same expansion with /opt/go/1.21.1/bin prepended; trimmed ...] 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=[... same expansion with /opt/protoc/21.7/bin prepended; trimmed ...] 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo [... same PATH value; trimmed ...] 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:51.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=984811 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 984811' 00:18:51.895 Process pid: 984811 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 984811 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 984811 ']' 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
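
[Editor's note] waitforlisten above blocks with max_retries=100 until the freshly launched nvmf_tgt exposes its RPC socket, and only then do the rpc_cmd calls start. A minimal sketch of that wait; the real helper in autotest_common.sh does additional validation, this only checks that the UNIX socket node appears:

sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do        # mirror max_retries=100 from the trace
    [ -S "$sock" ] && break      # stop as soon as the socket exists
    sleep 0.1
done
[ -S "$sock" ] || { echo "target never listened on $sock" >&2; exit 1; }
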
00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.895 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:52.836 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.836 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:52.836 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.775 malloc0 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.775 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:54.035 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.035 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:54.035 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.035 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:54.035 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.035 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:54.035 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.035 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:54.035 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.035 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
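
[Editor's note] The rpc_cmd calls above are the complete vfio-user provisioning path: transport, backing bdev, subsystem, namespace, listener. rpc_cmd is the harness wrapper that effectively forwards to scripts/rpc.py on /var/tmp/spdk.sock, so the same sequence can be replayed by hand; a sketch mirroring the traced arguments:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER       # vfio-user transport, default options
mkdir -p /var/run/vfio-user                  # directory backing the device socket
$rpc bdev_malloc_create 64 512 -b malloc0    # 64 MB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk   # -a: allow any host, -s: serial
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
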
00:18:54.035 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:26.142 Fuzzing completed. Shutting down the fuzz application 00:19:26.142 00:19:26.142 Dumping successful admin opcodes: 00:19:26.142 8, 9, 10, 24, 00:19:26.142 Dumping successful io opcodes: 00:19:26.142 0, 00:19:26.142 NS: 0x20000081ef00 I/O qp, Total commands completed: 1248050, total successful commands: 4899, random_seed: 4248476352 00:19:26.142 NS: 0x20000081ef00 admin qp, Total commands completed: 257088, total successful commands: 2075, random_seed: 1331289984 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 984811 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 984811 ']' 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 984811 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 984811 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 984811' 00:19:26.142 killing process with pid 984811 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 984811 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 984811 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:26.142 00:19:26.142 real 0m32.780s 00:19:26.142 user 0m34.940s 00:19:26.142 sys 0m25.866s 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:26.142 
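
[Editor's note] The fuzz dump above gives everything needed to gauge hit rates: 4899 of 1248050 I/O commands and 2075 of 257088 admin commands succeeded over the 30-second run (-t 30, against a wall clock of ~32.8 s). A sketch of pulling the ratios out of a saved copy of the log; the file name is hypothetical:

awk '/Total commands completed/ {
    gsub(/,/, "")                                   # drop clause commas before splitting fields
    for (i = 1; i <= NF; i++) {
        if ($i == "completed:") total = $(i + 1)
        if ($i == "commands:" && $(i - 1) == "successful") ok = $(i + 1)
    }
    printf "%s qp: %s/%s = %.4f%% successful\n", $3, ok, total, 100 * ok / total
}' vfio_user_fuzz_log.txt
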
************************************ 00:19:26.142 END TEST nvmf_vfio_user_fuzz 00:19:26.142 ************************************ 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:26.142 ************************************ 00:19:26.142 START TEST nvmf_auth_target 00:19:26.142 ************************************ 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:26.142 * Looking for test storage... 00:19:26.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:26.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.142 --rc genhtml_branch_coverage=1 00:19:26.142 --rc genhtml_function_coverage=1 00:19:26.142 --rc genhtml_legend=1 00:19:26.142 --rc geninfo_all_blocks=1 00:19:26.142 --rc geninfo_unexecuted_blocks=1 00:19:26.142 00:19:26.142 ' 00:19:26.142 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:26.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.142 --rc genhtml_branch_coverage=1 00:19:26.142 --rc genhtml_function_coverage=1 00:19:26.142 --rc genhtml_legend=1 00:19:26.142 --rc geninfo_all_blocks=1 00:19:26.142 --rc geninfo_unexecuted_blocks=1 00:19:26.142 00:19:26.142 ' 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:26.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.143 --rc genhtml_branch_coverage=1 00:19:26.143 --rc genhtml_function_coverage=1 00:19:26.143 --rc genhtml_legend=1 00:19:26.143 --rc geninfo_all_blocks=1 00:19:26.143 --rc geninfo_unexecuted_blocks=1 00:19:26.143 00:19:26.143 ' 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:26.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.143 --rc genhtml_branch_coverage=1 00:19:26.143 --rc genhtml_function_coverage=1 00:19:26.143 --rc genhtml_legend=1 00:19:26.143 --rc geninfo_all_blocks=1 00:19:26.143 --rc geninfo_unexecuted_blocks=1 00:19:26.143 00:19:26.143 ' 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.143 10:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:26.143 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:32.732 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:32.732 
10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:32.733 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:32.733 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.733 10:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:32.733 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:32.733 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:32.733 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:32.733 10:47:11 
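[annotation] nvmf_tcp_init above splits the two ice ports across a network-namespace boundary so the initiator side (cvl_0_1, 10.0.0.1) reaches the target side (cvl_0_0, 10.0.0.2) over a real TCP path rather than loopback. The commands, replayed in the order the trace runs them:

ip netns add cvl_0_0_ns_spdk                                     # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port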
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:32.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:19:32.733 00:19:32.733 --- 10.0.0.2 ping statistics --- 00:19:32.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.733 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:19:32.733 00:19:32.733 --- 10.0.0.1 ping statistics --- 00:19:32.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.733 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:32.733 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=994904 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 994904 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 994904 ']' 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
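[annotation] With both directions ping-verified and nvme-tcp loaded, nvmfappstart runs the target inside the namespace with the auth trace flag enabled. A condensed sketch of what the trace executes; the polling loop is an assumption about what waitforlisten does (rpc_get_methods is one RPC that answers once the app is up), not its exact implementation:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
# poll until the app answers on its RPC socket before issuing real commands
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done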
00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.734 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.306 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=994938 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=895c585038b6448ab618892cfd1d5eb0a93f89f687c6e078 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YfY 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 895c585038b6448ab618892cfd1d5eb0a93f89f687c6e078 0 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 895c585038b6448ab618892cfd1d5eb0a93f89f687c6e078 0 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=895c585038b6448ab618892cfd1d5eb0a93f89f687c6e078 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YfY 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YfY 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.YfY 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b1b59df611bce5993298fda5c615537090a9f2258fcd2141e760dd0db7cc57ad 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.COM 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b1b59df611bce5993298fda5c615537090a9f2258fcd2141e760dd0db7cc57ad 3 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b1b59df611bce5993298fda5c615537090a9f2258fcd2141e760dd0db7cc57ad 3 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b1b59df611bce5993298fda5c615537090a9f2258fcd2141e760dd0db7cc57ad 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.COM 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.COM 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.COM 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
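[annotation] The gen_dhchap_key calls above all follow the same recipe: draw len/2 random bytes, hex-encode them with xxd, wrap the ASCII hex string in a DHHC-1 envelope, base64(secret + crc32(secret)) with the digest id in the header, and lock the file to 0600. A hedged reconstruction of the python step (the function name here is illustrative; the real one is format_key in nvmf/common.sh):

format_dhhc() {    # format_dhhc <hex-secret> <digest-id>
    python3 - "$1" "$2" << 'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, byteorder="little")    # 4-byte trailer
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF
}

key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars, as for "gen_dhchap_key null 48"
format_dhhc "$key" 0                    # -> DHHC-1:00:...: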
00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6bff1461ce120f3a193452cf8928a12d 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dK6 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6bff1461ce120f3a193452cf8928a12d 1 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6bff1461ce120f3a193452cf8928a12d 1 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6bff1461ce120f3a193452cf8928a12d 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dK6 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dK6 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dK6 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8ffdc3f3fc2da8a95182c9c7d96556b93ec578140f560597 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.HwY 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8ffdc3f3fc2da8a95182c9c7d96556b93ec578140f560597 2 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8ffdc3f3fc2da8a95182c9c7d96556b93ec578140f560597 2 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.307 10:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8ffdc3f3fc2da8a95182c9c7d96556b93ec578140f560597 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:33.307 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.HwY 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.HwY 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.HwY 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e60f1959d1b3868dd0e3df8e784ce4a9d858cd0c44073ea7 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lUh 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e60f1959d1b3868dd0e3df8e784ce4a9d858cd0c44073ea7 2 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e60f1959d1b3868dd0e3df8e784ce4a9d858cd0c44073ea7 2 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e60f1959d1b3868dd0e3df8e784ce4a9d858cd0c44073ea7 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lUh 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lUh 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.lUh 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9e18f4e8c4fc5ce341831eae4b1a67f8 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0Vr 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9e18f4e8c4fc5ce341831eae4b1a67f8 1 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9e18f4e8c4fc5ce341831eae4b1a67f8 1 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9e18f4e8c4fc5ce341831eae4b1a67f8 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0Vr 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0Vr 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.0Vr 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e7459cf8628c760bfe556b2eb66884e953dcc965ccb198589b77b1e532f8b70e 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lVu 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key e7459cf8628c760bfe556b2eb66884e953dcc965ccb198589b77b1e532f8b70e 3 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e7459cf8628c760bfe556b2eb66884e953dcc965ccb198589b77b1e532f8b70e 3 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e7459cf8628c760bfe556b2eb66884e953dcc965ccb198589b77b1e532f8b70e 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lVu 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lVu 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.lVu 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 994904 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 994904 ']' 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.571 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.833 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.833 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:33.833 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 994938 /var/tmp/host.sock 00:19:33.833 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 994938 ']' 00:19:33.833 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:33.833 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.833 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:33.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
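[annotation] Putting the pieces together: the digest ids in the header follow the map the trace declares repeatedly (null=0, sha256=1, sha384=2, sha512=3), and each key file is created via mktemp and chmod'ed to 0600. A compact wrapper in the spirit of gen_dhchap_key, reusing the format_dhhc sketch from earlier (names illustrative):

declare -A digest_ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
gen_key() {    # gen_key <digest> <len>
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhhc "$key" "${digest_ids[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
keys[0]=$(gen_key null 48); keys[3]=$(gen_key sha512 64)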
00:19:33.833 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.833 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YfY 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.YfY 00:19:34.094 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.YfY 00:19:34.355 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.COM ]] 00:19:34.355 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.COM 00:19:34.355 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.355 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.355 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.355 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.COM 00:19:34.355 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.COM 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dK6 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.615 10:47:13 
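[annotation] Each key file is then registered twice under matching names: once with the target over /var/tmp/spdk.sock (the rpc_cmd default, actually wrapped in the namespace exec) and once with the host stack over /var/tmp/host.sock, so both ends of DH-HMAC-CHAP can resolve key0/ckey0. Distilled from the trace, with the rpc.py path abbreviated:

rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.YfY                          # target side
rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.YfY    # host side
rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.COM                       # controller key
rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.COM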
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dK6 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dK6 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.HwY ]] 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HwY 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HwY 00:19:34.615 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HwY 00:19:34.876 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:34.876 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lUh 00:19:34.876 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.877 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.877 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.877 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lUh 00:19:34.877 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lUh 00:19:35.137 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.0Vr ]] 00:19:35.137 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Vr 00:19:35.137 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.137 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.137 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.137 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Vr 00:19:35.137 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Vr 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:35.398 10:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lVu 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lVu 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lVu 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:35.398 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.658 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.658 
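[annotation] connect_authenticate then exercises one cell of the matrix: the host stack is restricted to a single digest/dhgroup pair, the host NQN is allowed on the subsystem with its key pair, and the initiator bdev attaches with the same key names. Condensed from the calls above ($hostnqn as declared at target/auth.sh@16, rpc.py path abbreviated):

rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0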
10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.919 00:19:35.919 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.919 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.920 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.180 { 00:19:36.180 "cntlid": 1, 00:19:36.180 "qid": 0, 00:19:36.180 "state": "enabled", 00:19:36.180 "thread": "nvmf_tgt_poll_group_000", 00:19:36.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:36.180 "listen_address": { 00:19:36.180 "trtype": "TCP", 00:19:36.180 "adrfam": "IPv4", 00:19:36.180 "traddr": "10.0.0.2", 00:19:36.180 "trsvcid": "4420" 00:19:36.180 }, 00:19:36.180 "peer_address": { 00:19:36.180 "trtype": "TCP", 00:19:36.180 "adrfam": "IPv4", 00:19:36.180 "traddr": "10.0.0.1", 00:19:36.180 "trsvcid": "37438" 00:19:36.180 }, 00:19:36.180 "auth": { 00:19:36.180 "state": "completed", 00:19:36.180 "digest": "sha256", 00:19:36.180 "dhgroup": "null" 00:19:36.180 } 00:19:36.180 } 00:19:36.180 ]' 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.180 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.441 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:19:36.441 10:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:19:37.009 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.268 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.269 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.269 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.269 10:47:16 
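[annotation] The --dhchap-secret strings handed to nvme connect above are the same DHHC-1 envelopes generated earlier, so stripping the header and base64-decoding recovers the original hex secret plus the trailing 4-byte CRC. A quick sanity check against the key0 secret from this trace:

s='DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==:'
payload=${s#DHHC-1:00:}
payload=${payload%:}
base64 -d <<< "$payload" | head -c 48; echo    # 895c585038b6448ab618892cfd1d5eb0a93f89f687c6e078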
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.269 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.529 00:19:37.529 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.529 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.529 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.790 { 00:19:37.790 "cntlid": 3, 00:19:37.790 "qid": 0, 00:19:37.790 "state": "enabled", 00:19:37.790 "thread": "nvmf_tgt_poll_group_000", 00:19:37.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:37.790 "listen_address": { 00:19:37.790 "trtype": "TCP", 00:19:37.790 "adrfam": "IPv4", 00:19:37.790 "traddr": "10.0.0.2", 00:19:37.790 "trsvcid": "4420" 00:19:37.790 }, 00:19:37.790 "peer_address": { 00:19:37.790 "trtype": "TCP", 00:19:37.790 "adrfam": "IPv4", 00:19:37.790 "traddr": "10.0.0.1", 00:19:37.790 "trsvcid": "37460" 00:19:37.790 }, 00:19:37.790 "auth": { 00:19:37.790 "state": "completed", 00:19:37.790 "digest": "sha256", 00:19:37.790 "dhgroup": "null" 00:19:37.790 } 00:19:37.790 } 00:19:37.790 ]' 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:37.790 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.050 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.050 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.050 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
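
The trace above is one complete pass of the test's per-key loop: pin the host's allowed DH-HMAC-CHAP parameters, register the host NQN on the subsystem with a key pair, attach a bdev controller (which forces the authentication handshake), verify the negotiated parameters on the target, then tear everything down and repeat with the next key. Condensed to the three RPCs that matter, one pass looks roughly like the sketch below (paths, NQNs and key names are taken from this run; the named keys are assumed to have been registered with both SPDK instances earlier in the run, which this excerpt does not show; nvmf_subsystem_add_host goes to the target's default RPC socket, while the bdev_nvme_* calls go to the host instance at /var/tmp/host.sock):

  # host side: restrict which digests/DH groups the initiator may negotiate
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null
  # target side: require DH-CHAP with this key pair for this host NQN
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: connect; the attach only succeeds if authentication completes
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
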
00:19:38.050 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:38.050 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==:
00:19:38.050 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==:
00:19:38.992 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:38.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:38.992 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:38.992 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:38.992 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.992 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:38.992 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:38.992 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:38.992 10:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:38.992 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:39.253
00:19:39.253 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:39.253 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:39.253 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:39.514 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:39.514 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:39.514 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.514 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.515 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.515 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:39.515 {
00:19:39.515 "cntlid": 5,
00:19:39.515 "qid": 0,
00:19:39.515 "state": "enabled",
00:19:39.515 "thread": "nvmf_tgt_poll_group_000",
00:19:39.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:39.515 "listen_address": {
00:19:39.515 "trtype": "TCP",
00:19:39.515 "adrfam": "IPv4",
00:19:39.515 "traddr": "10.0.0.2",
00:19:39.515 "trsvcid": "4420"
00:19:39.515 },
00:19:39.515 "peer_address": {
00:19:39.515 "trtype": "TCP",
00:19:39.515 "adrfam": "IPv4",
00:19:39.515 "traddr": "10.0.0.1",
00:19:39.515 "trsvcid": "37492"
00:19:39.515 },
00:19:39.515 "auth": {
00:19:39.515 "state": "completed",
00:19:39.515 "digest": "sha256",
00:19:39.515 "dhgroup": "null"
00:19:39.515 }
00:19:39.515 }
00:19:39.515 ]'
00:19:39.515 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:39.515 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:39.515 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:39.515 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:39.515 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:39.515 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
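
Each verification block drives the same three jq probes against the nvmf_subsystem_get_qpairs output; the fields checked are exactly the ones visible in the JSON dumps above. A compact equivalent of the three-step check, as a single hypothetical one-liner over the same RPC and fields:

  # print the negotiated digest, DH group and auth state for the first qpair
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
  # for the pass above this should print: sha256 null completed
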
00:19:39.515 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:39.515 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:39.776 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr:
00:19:39.776 10:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr:
00:19:40.345 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:40.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:40.345 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:40.345 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.345 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:40.345 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.345 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:40.345 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:40.345 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:40.605 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:40.864
00:19:40.864 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:40.864 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:40.864 10:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:41.124 {
00:19:41.124 "cntlid": 7,
00:19:41.124 "qid": 0,
00:19:41.124 "state": "enabled",
00:19:41.124 "thread": "nvmf_tgt_poll_group_000",
00:19:41.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:41.124 "listen_address": {
00:19:41.124 "trtype": "TCP",
00:19:41.124 "adrfam": "IPv4",
00:19:41.124 "traddr": "10.0.0.2",
00:19:41.124 "trsvcid": "4420"
00:19:41.124 },
00:19:41.124 "peer_address": {
00:19:41.124 "trtype": "TCP",
00:19:41.124 "adrfam": "IPv4",
00:19:41.124 "traddr": "10.0.0.1",
00:19:41.124 "trsvcid": "37518"
00:19:41.124 },
00:19:41.124 "auth": {
00:19:41.124 "state": "completed",
00:19:41.124 "digest": "sha256",
00:19:41.124 "dhgroup": "null"
00:19:41.124 }
00:19:41.124 }
00:19:41.124 ]'
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:41.124 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:41.396 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=:
00:19:41.396 10:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=:
00:19:41.966 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:41.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:41.966 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:41.966 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.966 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.966 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.966 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:41.966 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:41.966 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:41.966 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
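
The key3 pass that just completed is the odd one out: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at target/auth.sh@68 yields nothing, and both nvmf_subsystem_add_host and the connect commands carried --dhchap-key only, with a single --dhchap-secret on the nvme-cli side. That pass therefore exercises unidirectional authentication: the host proves its identity to the target, but does not demand proof back. The bash idiom in isolation (a minimal sketch, not taken from auth.sh verbatim):

  ckeys[3]=""                                      # slot 3 deliberately has no controller key
  ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})   # ${var:+word} expands to nothing when var is empty
  echo "${#ckey[@]} extra args"                    # prints "0 extra args" -> one-way auth only
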
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:42.227 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:42.488
00:19:42.488 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:42.488 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:42.488 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:42.749 {
00:19:42.749 "cntlid": 9,
00:19:42.749 "qid": 0,
00:19:42.749 "state": "enabled",
00:19:42.749 "thread": "nvmf_tgt_poll_group_000",
00:19:42.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:42.749 "listen_address": {
00:19:42.749 "trtype": "TCP",
00:19:42.749 "adrfam": "IPv4",
00:19:42.749 "traddr": "10.0.0.2",
00:19:42.749 "trsvcid": "4420"
00:19:42.749 },
00:19:42.749 "peer_address": {
00:19:42.749 "trtype": "TCP",
00:19:42.749 "adrfam": "IPv4",
00:19:42.749 "traddr": "10.0.0.1",
00:19:42.749 "trsvcid": "37534"
00:19:42.749 },
00:19:42.749 "auth": {
00:19:42.749 "state": "completed",
00:19:42.749 "digest": "sha256",
00:19:42.749 "dhgroup": "ffdhe2048"
00:19:42.749 }
00:19:42.749 }
00:19:42.749 ]'
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:42.749 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:43.008 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=:
00:19:43.008 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=:
00:19:43.577 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:43.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:43.577 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:43.577 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.577 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:43.577 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.577 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:43.577 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:43.577 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
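
Every secret on the nvme connect command lines above is a TP 8006-style DH-CHAP secret container, DHHC-1:<t>:<base64 payload>:, where <t> encodes how the stored secret was transformed (00 = untransformed; 01/02/03 = SHA-256/384/512, per the spec's key format, with the payload carrying the key material plus a CRC-32). The container is plain text, so it can be picked apart with nothing but the shell; a small sketch using one of the secrets from this run:

  # split a DH-CHAP secret container into its fields (sketch)
  secret='DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG:'
  IFS=: read -r fmt xform b64 _ <<< "$secret"
  echo "format=$fmt transform=$xform payload=${#b64} base64 chars"
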
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:43.837 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:44.097
00:19:44.097 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:44.098 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:44.098 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:44.358 {
00:19:44.358 "cntlid": 11,
00:19:44.358 "qid": 0,
00:19:44.358 "state": "enabled",
00:19:44.358 "thread": "nvmf_tgt_poll_group_000",
00:19:44.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:44.358 "listen_address": {
00:19:44.358 "trtype": "TCP",
00:19:44.358 "adrfam": "IPv4",
00:19:44.358 "traddr": "10.0.0.2",
00:19:44.358 "trsvcid": "4420"
00:19:44.358 },
00:19:44.358 "peer_address": {
00:19:44.358 "trtype": "TCP",
00:19:44.358 "adrfam": "IPv4",
00:19:44.358 "traddr": "10.0.0.1",
00:19:44.358 "trsvcid": "37548"
00:19:44.358 },
00:19:44.358 "auth": {
00:19:44.358 "state": "completed",
00:19:44.358 "digest": "sha256",
00:19:44.358 "dhgroup": "ffdhe2048"
00:19:44.358 }
00:19:44.358 }
00:19:44.358 ]'
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:44.358 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:44.359 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:44.619 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:44.619 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==:
00:19:44.619 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==:
00:19:45.189 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:45.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:45.189 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:45.189 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.189 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:45.189 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.189 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:45.189 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:45.189 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:45.449 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:45.708
00:19:45.708 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:45.708 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:45.708 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:45.968 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:45.968 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:45.968 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.969 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:45.969 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.969 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:45.969 {
00:19:45.969 "cntlid": 13,
00:19:45.969 "qid": 0,
00:19:45.969 "state": "enabled",
00:19:45.969 "thread": "nvmf_tgt_poll_group_000",
00:19:45.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:45.969 "listen_address": {
00:19:45.969 "trtype": "TCP",
00:19:45.969 "adrfam": "IPv4",
00:19:45.969 "traddr": "10.0.0.2",
00:19:45.969 "trsvcid": "4420"
00:19:45.969 },
00:19:45.969 "peer_address": {
00:19:45.969 "trtype": "TCP",
00:19:45.969 "adrfam": "IPv4",
00:19:45.969 "traddr": "10.0.0.1",
00:19:45.969 "trsvcid": "53744"
00:19:45.969 },
00:19:45.969 "auth": {
00:19:45.969 "state": "completed",
00:19:45.969 "digest": "sha256",
00:19:45.969 "dhgroup": "ffdhe2048"
00:19:45.969 }
00:19:45.969 }
00:19:45.969 ]'
00:19:45.969 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:45.969 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:45.969 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:45.969 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:45.969 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:45.969 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:45.969 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:45.969 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:46.229 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr:
00:19:46.229 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr:
00:19:46.800 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:46.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:46.800 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:46.800 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.800 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:46.800 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.800 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:46.800 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:46.800 10:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:47.061 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:47.322
00:19:47.322 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:47.322 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:47.322 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:47.583 {
00:19:47.583 "cntlid": 15,
00:19:47.583 "qid": 0,
00:19:47.583 "state": "enabled",
00:19:47.583 "thread": "nvmf_tgt_poll_group_000",
00:19:47.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:47.583 "listen_address": {
00:19:47.583 "trtype": "TCP",
00:19:47.583 "adrfam": "IPv4",
00:19:47.583 "traddr": "10.0.0.2",
00:19:47.583 "trsvcid": "4420"
00:19:47.583 },
00:19:47.583 "peer_address": {
00:19:47.583 "trtype": "TCP",
00:19:47.583 "adrfam": "IPv4",
00:19:47.583 "traddr": "10.0.0.1",
00:19:47.583 "trsvcid": "53752"
00:19:47.583 },
00:19:47.583 "auth": {
00:19:47.583 "state": "completed",
00:19:47.583 "digest": "sha256",
00:19:47.583 "dhgroup": "ffdhe2048"
00:19:47.583 }
00:19:47.583 }
00:19:47.583 ]'
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:47.583 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:47.844 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=:
00:19:47.844 10:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=:
00:19:48.415 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:48.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:48.415 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:48.415 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.415 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.415 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.415 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:48.415 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:48.415 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:48.415 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:48.675 10:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:48.936
00:19:48.936 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:48.936 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:48.936 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:49.197 {
00:19:49.197 "cntlid": 17,
00:19:49.197 "qid": 0,
00:19:49.197 "state": "enabled",
00:19:49.197 "thread": "nvmf_tgt_poll_group_000",
00:19:49.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:49.197 "listen_address": {
00:19:49.197 "trtype": "TCP",
00:19:49.197 "adrfam": "IPv4",
00:19:49.197 "traddr": "10.0.0.2",
00:19:49.197 "trsvcid": "4420"
00:19:49.197 },
00:19:49.197 "peer_address": {
00:19:49.197 "trtype": "TCP",
00:19:49.197 "adrfam": "IPv4",
00:19:49.197 "traddr": "10.0.0.1",
00:19:49.197 "trsvcid": "53784"
00:19:49.197 },
00:19:49.197 "auth": {
00:19:49.197 "state": "completed",
00:19:49.197 "digest": "sha256",
00:19:49.197 "dhgroup": "ffdhe3072"
00:19:49.197 }
00:19:49.197 }
00:19:49.197 ]'
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:49.197 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:49.458 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=:
00:19:49.458 10:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=:
00:19:50.030 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:50.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:50.030 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:50.030 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.030 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.030 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.030 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:50.030 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:50.030 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:50.289 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:19:50.289 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:50.289 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:50.289 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:50.289 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:50.289 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:50.289 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:50.289 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.290 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.290 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.290 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:50.290 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:50.290 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:50.550
00:19:50.550 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:50.550 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:50.550 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:50.810 {
00:19:50.810 "cntlid": 19,
00:19:50.810 "qid": 0,
00:19:50.810 "state": "enabled",
00:19:50.810 "thread": "nvmf_tgt_poll_group_000",
00:19:50.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:50.810 "listen_address": {
00:19:50.810 "trtype": "TCP",
00:19:50.810 "adrfam": "IPv4",
00:19:50.810 "traddr": "10.0.0.2",
00:19:50.810 "trsvcid": "4420"
00:19:50.810 },
00:19:50.810 "peer_address": {
00:19:50.810 "trtype": "TCP",
00:19:50.810 "adrfam": "IPv4",
00:19:50.810 "traddr": "10.0.0.1",
00:19:50.810 "trsvcid": "53810"
00:19:50.810 },
00:19:50.810 "auth": {
00:19:50.810 "state": "completed",
00:19:50.810 "digest": "sha256",
00:19:50.810 "dhgroup": "ffdhe3072"
00:19:50.810 }
00:19:50.810 }
00:19:50.810 ]'
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:50.810 10:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:51.070 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==:
00:19:51.070 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==:
00:19:51.641 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:51.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:51.641 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:51.641 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:51.641 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.641 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:51.641 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.641 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.902 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.903 10:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.162 00:19:52.162 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.162 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.162 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.421 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.421 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.422 10:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.422 { 00:19:52.422 "cntlid": 21, 00:19:52.422 "qid": 0, 00:19:52.422 "state": "enabled", 00:19:52.422 "thread": "nvmf_tgt_poll_group_000", 00:19:52.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:52.422 "listen_address": { 00:19:52.422 "trtype": "TCP", 00:19:52.422 "adrfam": "IPv4", 00:19:52.422 "traddr": "10.0.0.2", 00:19:52.422 "trsvcid": "4420" 00:19:52.422 }, 00:19:52.422 "peer_address": { 00:19:52.422 "trtype": "TCP", 00:19:52.422 "adrfam": "IPv4", 00:19:52.422 "traddr": "10.0.0.1", 00:19:52.422 "trsvcid": "53842" 00:19:52.422 }, 00:19:52.422 "auth": { 00:19:52.422 "state": "completed", 00:19:52.422 "digest": "sha256", 00:19:52.422 "dhgroup": "ffdhe3072" 00:19:52.422 } 00:19:52.422 } 00:19:52.422 ]' 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.422 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.682 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:19:52.682 10:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:19:53.254 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.254 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.254 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.254 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.254 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:53.254 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.254 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.254 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.515 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.775 00:19:53.775 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.775 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.775 10:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.035 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.035 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.035 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.035 10:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.035 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.035 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.035 { 00:19:54.035 "cntlid": 23, 00:19:54.035 "qid": 0, 00:19:54.035 "state": "enabled", 00:19:54.035 "thread": "nvmf_tgt_poll_group_000", 00:19:54.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:54.035 "listen_address": { 00:19:54.035 "trtype": "TCP", 00:19:54.035 "adrfam": "IPv4", 00:19:54.035 "traddr": "10.0.0.2", 00:19:54.035 "trsvcid": "4420" 00:19:54.035 }, 00:19:54.035 "peer_address": { 00:19:54.035 "trtype": "TCP", 00:19:54.035 "adrfam": "IPv4", 00:19:54.035 "traddr": "10.0.0.1", 00:19:54.035 "trsvcid": "53872" 00:19:54.035 }, 00:19:54.035 "auth": { 00:19:54.035 "state": "completed", 00:19:54.035 "digest": "sha256", 00:19:54.035 "dhgroup": "ffdhe3072" 00:19:54.035 } 00:19:54.035 } 00:19:54.035 ]' 00:19:54.035 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.035 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.035 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.035 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.035 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.036 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.036 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.036 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.296 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:19:54.296 10:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:19:54.866 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.866 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.866 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.866 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.866 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:54.866 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.866 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.866 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.866 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.126 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.127 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.387 00:19:55.387 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.387 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.387 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.648 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.648 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.648 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.648 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.648 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.648 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.648 { 00:19:55.648 "cntlid": 25, 00:19:55.648 "qid": 0, 00:19:55.648 "state": "enabled", 00:19:55.648 "thread": "nvmf_tgt_poll_group_000", 00:19:55.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:55.648 "listen_address": { 00:19:55.648 "trtype": "TCP", 00:19:55.648 "adrfam": "IPv4", 00:19:55.648 "traddr": "10.0.0.2", 00:19:55.648 "trsvcid": "4420" 00:19:55.648 }, 00:19:55.648 "peer_address": { 00:19:55.648 "trtype": "TCP", 00:19:55.649 "adrfam": "IPv4", 00:19:55.649 "traddr": "10.0.0.1", 00:19:55.649 "trsvcid": "53886" 00:19:55.649 }, 00:19:55.649 "auth": { 00:19:55.649 "state": "completed", 00:19:55.649 "digest": "sha256", 00:19:55.649 "dhgroup": "ffdhe4096" 00:19:55.649 } 00:19:55.649 } 00:19:55.649 ]' 00:19:55.649 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.649 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.649 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.649 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:55.649 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.649 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.649 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.649 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.909 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:19:55.909 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.850 10:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.112 00:19:57.112 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.112 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.112 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.373 { 00:19:57.373 "cntlid": 27, 00:19:57.373 "qid": 0, 00:19:57.373 "state": "enabled", 00:19:57.373 "thread": "nvmf_tgt_poll_group_000", 00:19:57.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:57.373 "listen_address": { 00:19:57.373 "trtype": "TCP", 00:19:57.373 "adrfam": "IPv4", 00:19:57.373 "traddr": "10.0.0.2", 00:19:57.373 "trsvcid": "4420" 00:19:57.373 }, 00:19:57.373 "peer_address": { 00:19:57.373 "trtype": "TCP", 00:19:57.373 "adrfam": "IPv4", 00:19:57.373 "traddr": "10.0.0.1", 00:19:57.373 "trsvcid": "43684" 00:19:57.373 }, 00:19:57.373 "auth": { 00:19:57.373 "state": "completed", 00:19:57.373 "digest": "sha256", 00:19:57.373 "dhgroup": "ffdhe4096" 00:19:57.373 } 00:19:57.373 } 00:19:57.373 ]' 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.373 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.633 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:19:57.633 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:19:58.240 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:58.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.240 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.240 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.240 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.240 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.240 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.240 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.240 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.519 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.815 00:19:58.815 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
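For reference, the cycle the trace keeps repeating distills to the host/target RPC sequence below. This is a condensed sketch, not the test script itself: the RPC socket path, NQNs, subsystem name, and key names (key0..key3, ckey0..ckey3, registered earlier in the run) are taken verbatim from the trace, while the DHHC-1 secret values are elided.

  # One connect_authenticate pass, condensed. Assumes the SPDK target
  # (default RPC socket) and the host app (-s /var/tmp/host.sock) are
  # already running, exactly as in the trace above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Pin the host to a single digest/dhgroup combination for this pass.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Register the host NQN on the subsystem with its DH-HMAC-CHAP keys
  # (the ckeyN controller key makes the authentication bidirectional;
  # the key3 passes in the trace omit it).
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attaching the controller is what triggers the auth transaction.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify on the target, then detach so the next combination can run.
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0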
00:19:58.815 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.815 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.815 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.815 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.815 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.815 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.815 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.815 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.815 { 00:19:58.815 "cntlid": 29, 00:19:58.815 "qid": 0, 00:19:58.815 "state": "enabled", 00:19:58.815 "thread": "nvmf_tgt_poll_group_000", 00:19:58.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:58.815 "listen_address": { 00:19:58.815 "trtype": "TCP", 00:19:58.815 "adrfam": "IPv4", 00:19:58.815 "traddr": "10.0.0.2", 00:19:58.815 "trsvcid": "4420" 00:19:58.815 }, 00:19:58.815 "peer_address": { 00:19:58.815 "trtype": "TCP", 00:19:58.815 "adrfam": "IPv4", 00:19:58.815 "traddr": "10.0.0.1", 00:19:58.816 "trsvcid": "43714" 00:19:58.816 }, 00:19:58.816 "auth": { 00:19:58.816 "state": "completed", 00:19:58.816 "digest": "sha256", 00:19:58.816 "dhgroup": "ffdhe4096" 00:19:58.816 } 00:19:58.816 } 00:19:58.816 ]' 00:19:58.816 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.105 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.105 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.105 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.105 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.105 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.105 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.105 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.105 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:19:59.105 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: 
--dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:00.045 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.045 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:00.045 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.045 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.045 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.046 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.046 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.046 10:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.046 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.305 00:20:00.305 10:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.305 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.305 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.565 { 00:20:00.565 "cntlid": 31, 00:20:00.565 "qid": 0, 00:20:00.565 "state": "enabled", 00:20:00.565 "thread": "nvmf_tgt_poll_group_000", 00:20:00.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:00.565 "listen_address": { 00:20:00.565 "trtype": "TCP", 00:20:00.565 "adrfam": "IPv4", 00:20:00.565 "traddr": "10.0.0.2", 00:20:00.565 "trsvcid": "4420" 00:20:00.565 }, 00:20:00.565 "peer_address": { 00:20:00.565 "trtype": "TCP", 00:20:00.565 "adrfam": "IPv4", 00:20:00.565 "traddr": "10.0.0.1", 00:20:00.565 "trsvcid": "43740" 00:20:00.565 }, 00:20:00.565 "auth": { 00:20:00.565 "state": "completed", 00:20:00.565 "digest": "sha256", 00:20:00.565 "dhgroup": "ffdhe4096" 00:20:00.565 } 00:20:00.565 } 00:20:00.565 ]' 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.565 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.825 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:00.825 10:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:01.394 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.394 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.394 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.394 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.394 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.394 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.394 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.394 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.395 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.654 10:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.913 00:20:01.913 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.913 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.913 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.172 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.172 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.172 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.172 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.172 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.172 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.172 { 00:20:02.172 "cntlid": 33, 00:20:02.172 "qid": 0, 00:20:02.172 "state": "enabled", 00:20:02.172 "thread": "nvmf_tgt_poll_group_000", 00:20:02.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:02.172 "listen_address": { 00:20:02.172 "trtype": "TCP", 00:20:02.172 "adrfam": "IPv4", 00:20:02.172 "traddr": "10.0.0.2", 00:20:02.172 "trsvcid": "4420" 00:20:02.172 }, 00:20:02.172 "peer_address": { 00:20:02.172 "trtype": "TCP", 00:20:02.172 "adrfam": "IPv4", 00:20:02.172 "traddr": "10.0.0.1", 00:20:02.172 "trsvcid": "43774" 00:20:02.172 }, 00:20:02.172 "auth": { 00:20:02.172 "state": "completed", 00:20:02.172 "digest": "sha256", 00:20:02.172 "dhgroup": "ffdhe6144" 00:20:02.172 } 00:20:02.172 } 00:20:02.172 ]' 00:20:02.172 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.173 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.173 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.432 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.432 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.432 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.432 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.432 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.432 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret 
DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:02.432 10:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.373 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.374 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.374 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.374 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.635 00:20:03.635 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.635 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.635 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.895 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.895 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.895 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.895 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.895 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.895 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.895 { 00:20:03.895 "cntlid": 35, 00:20:03.895 "qid": 0, 00:20:03.895 "state": "enabled", 00:20:03.895 "thread": "nvmf_tgt_poll_group_000", 00:20:03.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:03.895 "listen_address": { 00:20:03.895 "trtype": "TCP", 00:20:03.895 "adrfam": "IPv4", 00:20:03.895 "traddr": "10.0.0.2", 00:20:03.895 "trsvcid": "4420" 00:20:03.895 }, 00:20:03.895 "peer_address": { 00:20:03.895 "trtype": "TCP", 00:20:03.895 "adrfam": "IPv4", 00:20:03.895 "traddr": "10.0.0.1", 00:20:03.895 "trsvcid": "43796" 00:20:03.895 }, 00:20:03.895 "auth": { 00:20:03.895 "state": "completed", 00:20:03.895 "digest": "sha256", 00:20:03.895 "dhgroup": "ffdhe6144" 00:20:03.895 } 00:20:03.895 } 00:20:03.895 ]' 00:20:03.895 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.895 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.896 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.157 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.157 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.157 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.157 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.157 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.157 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:04.157 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:05.096 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.096 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.096 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.096 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.096 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.096 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.096 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.096 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.096 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.356 00:20:05.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.617 { 00:20:05.617 "cntlid": 37, 00:20:05.617 "qid": 0, 00:20:05.617 "state": "enabled", 00:20:05.617 "thread": "nvmf_tgt_poll_group_000", 00:20:05.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:05.617 "listen_address": { 00:20:05.617 "trtype": "TCP", 00:20:05.617 "adrfam": "IPv4", 00:20:05.617 "traddr": "10.0.0.2", 00:20:05.617 "trsvcid": "4420" 00:20:05.617 }, 00:20:05.617 "peer_address": { 00:20:05.617 "trtype": "TCP", 00:20:05.617 "adrfam": "IPv4", 00:20:05.617 "traddr": "10.0.0.1", 00:20:05.617 "trsvcid": "43830" 00:20:05.617 }, 00:20:05.617 "auth": { 00:20:05.617 "state": "completed", 00:20:05.617 "digest": "sha256", 00:20:05.617 "dhgroup": "ffdhe6144" 00:20:05.617 } 00:20:05.617 } 00:20:05.617 ]' 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:05.617 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.877 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.877 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:05.877 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.877 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:05.877 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:06.446 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.706 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.706 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.706 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.706 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.706 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.706 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.706 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.706 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:06.706 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.706 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.707 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:06.707 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:06.707 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.707 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:06.707 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.707 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.707 10:47:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.707 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:06.707 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.707 10:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.276 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.276 { 00:20:07.276 "cntlid": 39, 00:20:07.276 "qid": 0, 00:20:07.276 "state": "enabled", 00:20:07.276 "thread": "nvmf_tgt_poll_group_000", 00:20:07.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:07.276 "listen_address": { 00:20:07.276 "trtype": "TCP", 00:20:07.276 "adrfam": "IPv4", 00:20:07.276 "traddr": "10.0.0.2", 00:20:07.276 "trsvcid": "4420" 00:20:07.276 }, 00:20:07.276 "peer_address": { 00:20:07.276 "trtype": "TCP", 00:20:07.276 "adrfam": "IPv4", 00:20:07.276 "traddr": "10.0.0.1", 00:20:07.276 "trsvcid": "37776" 00:20:07.276 }, 00:20:07.276 "auth": { 00:20:07.276 "state": "completed", 00:20:07.276 "digest": "sha256", 00:20:07.276 "dhgroup": "ffdhe6144" 00:20:07.276 } 00:20:07.276 } 00:20:07.276 ]' 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.276 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.537 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.537 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.537 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:07.537 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.537 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.537 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:07.537 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
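[Annotation: the entries just above and below walk one full connect_authenticate cycle for key0 under sha256/ffdhe8192. A minimal standalone sketch of that cycle as plain rpc.py calls follows; it assumes the target answers on its default RPC socket, the SPDK host stack on /var/tmp/host.sock, and that key0/ckey0 are DH-HMAC-CHAP key names registered with the host keyring earlier in the test, before this excerpt. Every command mirrors one visible in the log.]

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# pin the host stack to a single digest/dhgroup pair for this iteration
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# target side: admit the host NQN, binding its host key (key0) and controller key (ckey0)
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach the controller, authenticating in both directions
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# verify: the controller exists and the qpair negotiated exactly what was pinned
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'              # expect nvme0
$RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'  # expect "completed"

# teardown; in the log a kernel nvme connect/disconnect pass runs between these two steps
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"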
00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.480 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.481 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.053 00:20:09.053 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.053 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.053 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.053 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.053 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.053 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.053 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.053 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.053 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.053 { 00:20:09.053 "cntlid": 41, 00:20:09.053 "qid": 0, 00:20:09.053 "state": "enabled", 00:20:09.053 "thread": "nvmf_tgt_poll_group_000", 00:20:09.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:09.053 "listen_address": { 00:20:09.053 "trtype": "TCP", 00:20:09.053 "adrfam": "IPv4", 00:20:09.053 "traddr": "10.0.0.2", 00:20:09.053 "trsvcid": "4420" 00:20:09.053 }, 00:20:09.053 "peer_address": { 00:20:09.053 "trtype": "TCP", 00:20:09.053 "adrfam": "IPv4", 00:20:09.053 "traddr": "10.0.0.1", 00:20:09.053 "trsvcid": "37804" 00:20:09.053 }, 00:20:09.053 "auth": { 00:20:09.053 "state": "completed", 00:20:09.053 "digest": "sha256", 00:20:09.053 "dhgroup": "ffdhe8192" 00:20:09.053 } 00:20:09.053 } 00:20:09.053 ]' 00:20:09.053 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.313 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.313 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.313 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.313 10:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.313 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.313 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.313 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.573 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:09.573 10:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:10.143 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.143 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.143 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.143 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.143 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.143 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.143 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.143 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.402 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.662 00:20:10.922 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.922 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.922 10:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.922 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.922 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.922 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.922 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.923 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.923 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.923 { 00:20:10.923 "cntlid": 43, 00:20:10.923 "qid": 0, 00:20:10.923 "state": "enabled", 00:20:10.923 "thread": "nvmf_tgt_poll_group_000", 00:20:10.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:10.923 "listen_address": { 00:20:10.923 "trtype": "TCP", 00:20:10.923 "adrfam": "IPv4", 00:20:10.923 "traddr": "10.0.0.2", 00:20:10.923 "trsvcid": "4420" 00:20:10.923 }, 00:20:10.923 "peer_address": { 00:20:10.923 "trtype": "TCP", 00:20:10.923 "adrfam": "IPv4", 00:20:10.923 "traddr": "10.0.0.1", 00:20:10.923 "trsvcid": "37826" 00:20:10.923 }, 00:20:10.923 "auth": { 00:20:10.923 "state": "completed", 00:20:10.923 "digest": "sha256", 00:20:10.923 "dhgroup": "ffdhe8192" 00:20:10.923 } 00:20:10.923 } 00:20:10.923 ]' 00:20:10.923 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.923 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:10.923 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.182 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.182 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.182 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.182 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.182 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.442 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:11.442 10:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:12.012 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.012 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.012 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.012 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.012 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.012 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.012 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.012 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.272 10:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.272 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.532 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.792 { 00:20:12.792 "cntlid": 45, 00:20:12.792 "qid": 0, 00:20:12.792 "state": "enabled", 00:20:12.792 "thread": "nvmf_tgt_poll_group_000", 00:20:12.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:12.792 "listen_address": { 00:20:12.792 "trtype": "TCP", 00:20:12.792 "adrfam": "IPv4", 00:20:12.792 "traddr": "10.0.0.2", 00:20:12.792 "trsvcid": "4420" 00:20:12.792 }, 00:20:12.792 "peer_address": { 00:20:12.792 "trtype": "TCP", 00:20:12.792 "adrfam": "IPv4", 00:20:12.792 "traddr": "10.0.0.1", 00:20:12.792 "trsvcid": "37844" 00:20:12.792 }, 00:20:12.792 "auth": { 00:20:12.792 "state": "completed", 00:20:12.792 "digest": "sha256", 00:20:12.792 "dhgroup": "ffdhe8192" 00:20:12.792 } 00:20:12.792 } 00:20:12.792 ]' 00:20:12.792 
10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.792 10:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.052 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.052 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.052 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.052 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.052 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.312 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:13.312 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:13.883 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.883 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.883 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.883 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.883 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.883 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.883 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.883 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.144 10:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.144 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.404 00:20:14.404 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.664 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.664 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.664 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.664 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.664 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.664 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.664 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.665 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.665 { 00:20:14.665 "cntlid": 47, 00:20:14.665 "qid": 0, 00:20:14.665 "state": "enabled", 00:20:14.665 "thread": "nvmf_tgt_poll_group_000", 00:20:14.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:14.665 "listen_address": { 00:20:14.665 "trtype": "TCP", 00:20:14.665 "adrfam": "IPv4", 00:20:14.665 "traddr": "10.0.0.2", 00:20:14.665 "trsvcid": "4420" 00:20:14.665 }, 00:20:14.665 "peer_address": { 00:20:14.665 "trtype": "TCP", 00:20:14.665 "adrfam": "IPv4", 00:20:14.665 "traddr": "10.0.0.1", 00:20:14.665 "trsvcid": "37870" 00:20:14.665 }, 00:20:14.665 "auth": { 00:20:14.665 "state": "completed", 00:20:14.665 
"digest": "sha256", 00:20:14.665 "dhgroup": "ffdhe8192" 00:20:14.665 } 00:20:14.665 } 00:20:14.665 ]' 00:20:14.665 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.665 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.665 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.926 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.926 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.926 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.926 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.926 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.926 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:14.926 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:15.867 10:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.867 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.127 00:20:16.127 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.127 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.127 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.389 { 00:20:16.389 "cntlid": 49, 00:20:16.389 "qid": 0, 00:20:16.389 "state": "enabled", 00:20:16.389 "thread": "nvmf_tgt_poll_group_000", 00:20:16.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:16.389 "listen_address": { 00:20:16.389 "trtype": "TCP", 00:20:16.389 "adrfam": "IPv4", 
00:20:16.389 "traddr": "10.0.0.2", 00:20:16.389 "trsvcid": "4420" 00:20:16.389 }, 00:20:16.389 "peer_address": { 00:20:16.389 "trtype": "TCP", 00:20:16.389 "adrfam": "IPv4", 00:20:16.389 "traddr": "10.0.0.1", 00:20:16.389 "trsvcid": "49372" 00:20:16.389 }, 00:20:16.389 "auth": { 00:20:16.389 "state": "completed", 00:20:16.389 "digest": "sha384", 00:20:16.389 "dhgroup": "null" 00:20:16.389 } 00:20:16.389 } 00:20:16.389 ]' 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.389 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.650 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:16.650 10:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:17.221 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.221 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:17.221 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.222 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.222 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.222 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.222 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.222 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.482 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.741 00:20:17.741 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.741 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.741 10:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.001 { 00:20:18.001 "cntlid": 51, 00:20:18.001 "qid": 0, 00:20:18.001 "state": "enabled", 
00:20:18.001 "thread": "nvmf_tgt_poll_group_000", 00:20:18.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:18.001 "listen_address": { 00:20:18.001 "trtype": "TCP", 00:20:18.001 "adrfam": "IPv4", 00:20:18.001 "traddr": "10.0.0.2", 00:20:18.001 "trsvcid": "4420" 00:20:18.001 }, 00:20:18.001 "peer_address": { 00:20:18.001 "trtype": "TCP", 00:20:18.001 "adrfam": "IPv4", 00:20:18.001 "traddr": "10.0.0.1", 00:20:18.001 "trsvcid": "49402" 00:20:18.001 }, 00:20:18.001 "auth": { 00:20:18.001 "state": "completed", 00:20:18.001 "digest": "sha384", 00:20:18.001 "dhgroup": "null" 00:20:18.001 } 00:20:18.001 } 00:20:18.001 ]' 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.001 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.261 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:18.261 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:18.831 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.096 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.096 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.096 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.096 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.096 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.097 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.357 00:20:19.357 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.357 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.357 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.616 10:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.616 { 00:20:19.616 "cntlid": 53, 00:20:19.616 "qid": 0, 00:20:19.616 "state": "enabled", 00:20:19.616 "thread": "nvmf_tgt_poll_group_000", 00:20:19.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:19.616 "listen_address": { 00:20:19.616 "trtype": "TCP", 00:20:19.616 "adrfam": "IPv4", 00:20:19.616 "traddr": "10.0.0.2", 00:20:19.616 "trsvcid": "4420" 00:20:19.616 }, 00:20:19.616 "peer_address": { 00:20:19.616 "trtype": "TCP", 00:20:19.616 "adrfam": "IPv4", 00:20:19.616 "traddr": "10.0.0.1", 00:20:19.616 "trsvcid": "49436" 00:20:19.616 }, 00:20:19.616 "auth": { 00:20:19.616 "state": "completed", 00:20:19.616 "digest": "sha384", 00:20:19.616 "dhgroup": "null" 00:20:19.616 } 00:20:19.616 } 00:20:19.616 ]' 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.616 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.876 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:19.876 10:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:20.447 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.447 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:20.447 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.447 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:20.707 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.708 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.708 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.708 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.708 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.708 10:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.968 00:20:20.968 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.968 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.968 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.229 { 00:20:21.229 "cntlid": 55, 00:20:21.229 "qid": 0, 00:20:21.229 "state": "enabled", 00:20:21.229 "thread": "nvmf_tgt_poll_group_000", 00:20:21.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:21.229 "listen_address": { 00:20:21.229 "trtype": "TCP", 00:20:21.229 "adrfam": "IPv4", 00:20:21.229 "traddr": "10.0.0.2", 00:20:21.229 "trsvcid": "4420" 00:20:21.229 }, 00:20:21.229 "peer_address": { 00:20:21.229 "trtype": "TCP", 00:20:21.229 "adrfam": "IPv4", 00:20:21.229 "traddr": "10.0.0.1", 00:20:21.229 "trsvcid": "49478" 00:20:21.229 }, 00:20:21.229 "auth": { 00:20:21.229 "state": "completed", 00:20:21.229 "digest": "sha384", 00:20:21.229 "dhgroup": "null" 00:20:21.229 } 00:20:21.229 } 00:20:21.229 ]' 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.229 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.490 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:21.490 10:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:22.060 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.060 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:22.060 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.060 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.060 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.060 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.060 10:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.060 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.061 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.321 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.581 00:20:22.581 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.581 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.581 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.841 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.841 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.841 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:22.841 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.842 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.842 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.842 { 00:20:22.842 "cntlid": 57, 00:20:22.842 "qid": 0, 00:20:22.842 "state": "enabled", 00:20:22.842 "thread": "nvmf_tgt_poll_group_000", 00:20:22.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:22.842 "listen_address": { 00:20:22.842 "trtype": "TCP", 00:20:22.842 "adrfam": "IPv4", 00:20:22.842 "traddr": "10.0.0.2", 00:20:22.842 "trsvcid": "4420" 00:20:22.842 }, 00:20:22.842 "peer_address": { 00:20:22.842 "trtype": "TCP", 00:20:22.842 "adrfam": "IPv4", 00:20:22.842 "traddr": "10.0.0.1", 00:20:22.842 "trsvcid": "49502" 00:20:22.842 }, 00:20:22.842 "auth": { 00:20:22.842 "state": "completed", 00:20:22.842 "digest": "sha384", 00:20:22.842 "dhgroup": "ffdhe2048" 00:20:22.842 } 00:20:22.842 } 00:20:22.842 ]' 00:20:22.842 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.842 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.842 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.842 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.842 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.842 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.842 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.842 10:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.101 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:23.102 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:23.672 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.672 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:23.672 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.672 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.672 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.672 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.672 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.672 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.931 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:23.931 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.931 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.931 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:23.931 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:23.931 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.931 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.931 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.931 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.931 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.932 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.932 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.932 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.192 00:20:24.192 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.192 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.192 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.453 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.453 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.453 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.453 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.453 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.453 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.453 { 00:20:24.453 "cntlid": 59, 00:20:24.453 "qid": 0, 00:20:24.453 "state": "enabled", 00:20:24.453 "thread": "nvmf_tgt_poll_group_000", 00:20:24.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:24.453 "listen_address": { 00:20:24.453 "trtype": "TCP", 00:20:24.453 "adrfam": "IPv4", 00:20:24.453 "traddr": "10.0.0.2", 00:20:24.453 "trsvcid": "4420" 00:20:24.454 }, 00:20:24.454 "peer_address": { 00:20:24.454 "trtype": "TCP", 00:20:24.454 "adrfam": "IPv4", 00:20:24.454 "traddr": "10.0.0.1", 00:20:24.454 "trsvcid": "49510" 00:20:24.454 }, 00:20:24.454 "auth": { 00:20:24.454 "state": "completed", 00:20:24.454 "digest": "sha384", 00:20:24.454 "dhgroup": "ffdhe2048" 00:20:24.454 } 00:20:24.454 } 00:20:24.454 ]' 00:20:24.454 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.454 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.454 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.454 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.454 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.454 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.454 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.454 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.713 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:24.713 10:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:25.282 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.282 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.282 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.282 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.282 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.282 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.282 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.282 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.543 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.803 00:20:25.803 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.803 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.803 10:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.062 { 00:20:26.062 "cntlid": 61, 00:20:26.062 "qid": 0, 00:20:26.062 "state": "enabled", 00:20:26.062 "thread": "nvmf_tgt_poll_group_000", 00:20:26.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:26.062 "listen_address": { 00:20:26.062 "trtype": "TCP", 00:20:26.062 "adrfam": "IPv4", 00:20:26.062 "traddr": "10.0.0.2", 00:20:26.062 "trsvcid": "4420" 00:20:26.062 }, 00:20:26.062 "peer_address": { 00:20:26.062 "trtype": "TCP", 00:20:26.062 "adrfam": "IPv4", 00:20:26.062 "traddr": "10.0.0.1", 00:20:26.062 "trsvcid": "34178" 00:20:26.062 }, 00:20:26.062 "auth": { 00:20:26.062 "state": "completed", 00:20:26.062 "digest": "sha384", 00:20:26.062 "dhgroup": "ffdhe2048" 00:20:26.062 } 00:20:26.062 } 00:20:26.062 ]' 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.062 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.322 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:26.322 10:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:26.892 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.892 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:26.892 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.892 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.892 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.892 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.892 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:26.892 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.152 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.413 00:20:27.413 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.413 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.413 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.675 { 00:20:27.675 "cntlid": 63, 00:20:27.675 "qid": 0, 00:20:27.675 "state": "enabled", 00:20:27.675 "thread": "nvmf_tgt_poll_group_000", 00:20:27.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:27.675 "listen_address": { 00:20:27.675 "trtype": "TCP", 00:20:27.675 "adrfam": "IPv4", 00:20:27.675 "traddr": "10.0.0.2", 00:20:27.675 "trsvcid": "4420" 00:20:27.675 }, 00:20:27.675 "peer_address": { 00:20:27.675 "trtype": "TCP", 00:20:27.675 "adrfam": "IPv4", 00:20:27.675 "traddr": "10.0.0.1", 00:20:27.675 "trsvcid": "34212" 00:20:27.675 }, 00:20:27.675 "auth": { 00:20:27.675 "state": "completed", 00:20:27.675 "digest": "sha384", 00:20:27.675 "dhgroup": "ffdhe2048" 00:20:27.675 } 00:20:27.675 } 00:20:27.675 ]' 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.675 10:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.935 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:27.935 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:28.505 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:28.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.505 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.505 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.505 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.765 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.025 
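[Annotation] Between attach and detach, every pass runs the same three assertions against the qpair that the authenticated connection created; the pass beginning here tests sha384 with ffdhe2048 replaced by ffdhe3072. A sketch of that check, reusing the exact jq filters from the trace (rpc_cmd standing in for the harness's target-side rpc.py wrapper):

  # Fetch the qpairs of the subsystem and assert that the negotiated digest,
  # DH group, and authentication state match what this pass configured.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]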
00:20:29.025 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.025 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.025 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.285 { 00:20:29.285 "cntlid": 65, 00:20:29.285 "qid": 0, 00:20:29.285 "state": "enabled", 00:20:29.285 "thread": "nvmf_tgt_poll_group_000", 00:20:29.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:29.285 "listen_address": { 00:20:29.285 "trtype": "TCP", 00:20:29.285 "adrfam": "IPv4", 00:20:29.285 "traddr": "10.0.0.2", 00:20:29.285 "trsvcid": "4420" 00:20:29.285 }, 00:20:29.285 "peer_address": { 00:20:29.285 "trtype": "TCP", 00:20:29.285 "adrfam": "IPv4", 00:20:29.285 "traddr": "10.0.0.1", 00:20:29.285 "trsvcid": "34244" 00:20:29.285 }, 00:20:29.285 "auth": { 00:20:29.285 "state": "completed", 00:20:29.285 "digest": "sha384", 00:20:29.285 "dhgroup": "ffdhe3072" 00:20:29.285 } 00:20:29.285 } 00:20:29.285 ]' 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.285 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.545 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:29.545 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:30.115 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.115 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.115 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.115 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.376 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.637 00:20:30.637 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.637 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.637 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.898 { 00:20:30.898 "cntlid": 67, 00:20:30.898 "qid": 0, 00:20:30.898 "state": "enabled", 00:20:30.898 "thread": "nvmf_tgt_poll_group_000", 00:20:30.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:30.898 "listen_address": { 00:20:30.898 "trtype": "TCP", 00:20:30.898 "adrfam": "IPv4", 00:20:30.898 "traddr": "10.0.0.2", 00:20:30.898 "trsvcid": "4420" 00:20:30.898 }, 00:20:30.898 "peer_address": { 00:20:30.898 "trtype": "TCP", 00:20:30.898 "adrfam": "IPv4", 00:20:30.898 "traddr": "10.0.0.1", 00:20:30.898 "trsvcid": "34276" 00:20:30.898 }, 00:20:30.898 "auth": { 00:20:30.898 "state": "completed", 00:20:30.898 "digest": "sha384", 00:20:30.898 "dhgroup": "ffdhe3072" 00:20:30.898 } 00:20:30.898 } 00:20:30.898 ]' 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.898 10:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.898 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.898 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.898 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.160 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret 
DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:31.160 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:31.730 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.730 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:31.730 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.730 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.730 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.730 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.731 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.731 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.990 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.251 00:20:32.251 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.251 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.251 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.512 { 00:20:32.512 "cntlid": 69, 00:20:32.512 "qid": 0, 00:20:32.512 "state": "enabled", 00:20:32.512 "thread": "nvmf_tgt_poll_group_000", 00:20:32.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:32.512 "listen_address": { 00:20:32.512 "trtype": "TCP", 00:20:32.512 "adrfam": "IPv4", 00:20:32.512 "traddr": "10.0.0.2", 00:20:32.512 "trsvcid": "4420" 00:20:32.512 }, 00:20:32.512 "peer_address": { 00:20:32.512 "trtype": "TCP", 00:20:32.512 "adrfam": "IPv4", 00:20:32.512 "traddr": "10.0.0.1", 00:20:32.512 "trsvcid": "34300" 00:20:32.512 }, 00:20:32.512 "auth": { 00:20:32.512 "state": "completed", 00:20:32.512 "digest": "sha384", 00:20:32.512 "dhgroup": "ffdhe3072" 00:20:32.512 } 00:20:32.512 } 00:20:32.512 ]' 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.512 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:32.773 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:32.773 10:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:33.344 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.344 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.344 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.344 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.344 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.344 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.344 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.345 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.604 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:33.604 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.604 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.605 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:33.605 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.605 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.605 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:33.605 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.605 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.605 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.605 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
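The round traced here is the unidirectional case: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops out and both nvmf_subsystem_add_host and the attach are issued with only --dhchap-key key3, meaning the target authenticates the host but not vice versa. A minimal sketch of the pattern each round follows, assuming rpc.py is on PATH, the target's default RPC socket and the host socket at /var/tmp/host.sock are both up, and keys key0..ckey3 are already registered (the host NQN is copied from the trace):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }   # host-side RPC, as in auth.sh@31

  # 1. pin the initiator to a single digest/dhgroup combination
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # 2. authorize the host NQN on the target; key3 only, so no controller key
  rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3
  # 3. attach from the host with flags matching what the target expects
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

Bidirectional rounds (key0..key2 above) pass --dhchap-ctrlr-key ckeyN to both calls as well.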
00:20:33.605 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.605 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.865 00:20:33.865 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.865 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.865 10:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.125 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.125 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.125 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.125 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.125 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.125 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.125 { 00:20:34.125 "cntlid": 71, 00:20:34.125 "qid": 0, 00:20:34.125 "state": "enabled", 00:20:34.125 "thread": "nvmf_tgt_poll_group_000", 00:20:34.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:34.125 "listen_address": { 00:20:34.125 "trtype": "TCP", 00:20:34.125 "adrfam": "IPv4", 00:20:34.125 "traddr": "10.0.0.2", 00:20:34.125 "trsvcid": "4420" 00:20:34.125 }, 00:20:34.125 "peer_address": { 00:20:34.125 "trtype": "TCP", 00:20:34.125 "adrfam": "IPv4", 00:20:34.126 "traddr": "10.0.0.1", 00:20:34.126 "trsvcid": "34334" 00:20:34.126 }, 00:20:34.126 "auth": { 00:20:34.126 "state": "completed", 00:20:34.126 "digest": "sha384", 00:20:34.126 "dhgroup": "ffdhe3072" 00:20:34.126 } 00:20:34.126 } 00:20:34.126 ]' 00:20:34.126 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.126 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.126 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.126 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.126 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.126 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.126 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.126 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.386 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:34.386 10:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:34.956 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.956 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:34.956 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.956 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.956 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.956 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.956 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.956 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.956 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
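Each attach is then verified rather than merely assumed to have succeeded: the suite asserts the controller name from bdev_nvme_get_controllers, then pulls the auth block out of the target's qpair listing and checks that the negotiated digest, DH group, and handshake state match what was configured. A sketch of those assertions under the same assumptions as the previous snippet (rpc_cmd stands for the target-side RPC wrapper the suite uses; the expected values here are the ones for this ffdhe4096 round):

  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]      # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished

Only after these checks pass does the round tear down with bdev_nvme_detach_controller, nvme disconnect, and nvmf_subsystem_remove_host before the loops advance to the next key or DH group.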
00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.217 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.477 00:20:35.477 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.477 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.477 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.737 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.738 { 00:20:35.738 "cntlid": 73, 00:20:35.738 "qid": 0, 00:20:35.738 "state": "enabled", 00:20:35.738 "thread": "nvmf_tgt_poll_group_000", 00:20:35.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:35.738 "listen_address": { 00:20:35.738 "trtype": "TCP", 00:20:35.738 "adrfam": "IPv4", 00:20:35.738 "traddr": "10.0.0.2", 00:20:35.738 "trsvcid": "4420" 00:20:35.738 }, 00:20:35.738 "peer_address": { 00:20:35.738 "trtype": "TCP", 00:20:35.738 "adrfam": "IPv4", 00:20:35.738 "traddr": "10.0.0.1", 00:20:35.738 "trsvcid": "34368" 00:20:35.738 }, 00:20:35.738 "auth": { 00:20:35.738 "state": "completed", 00:20:35.738 "digest": "sha384", 00:20:35.738 "dhgroup": "ffdhe4096" 00:20:35.738 } 00:20:35.738 } 00:20:35.738 ]' 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.738 
10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.738 10:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.998 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:35.998 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:36.568 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.828 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.088 00:20:37.088 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.088 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.088 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.348 { 00:20:37.348 "cntlid": 75, 00:20:37.348 "qid": 0, 00:20:37.348 "state": "enabled", 00:20:37.348 "thread": "nvmf_tgt_poll_group_000", 00:20:37.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:37.348 "listen_address": { 00:20:37.348 "trtype": "TCP", 00:20:37.348 "adrfam": "IPv4", 00:20:37.348 "traddr": "10.0.0.2", 00:20:37.348 "trsvcid": "4420" 00:20:37.348 }, 00:20:37.348 "peer_address": { 00:20:37.348 "trtype": "TCP", 00:20:37.348 "adrfam": "IPv4", 00:20:37.348 "traddr": "10.0.0.1", 00:20:37.348 "trsvcid": "36304" 00:20:37.348 }, 00:20:37.348 "auth": { 00:20:37.348 "state": "completed", 00:20:37.348 "digest": "sha384", 00:20:37.348 "dhgroup": "ffdhe4096" 00:20:37.348 } 00:20:37.348 } 00:20:37.348 ]' 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:37.348 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.610 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.610 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.610 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.610 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:37.610 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:38.180 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.440 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.441 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.700 00:20:38.700 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.700 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.700 10:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.960 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.960 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.960 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.960 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.960 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.960 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.960 { 00:20:38.960 "cntlid": 77, 00:20:38.960 "qid": 0, 00:20:38.960 "state": "enabled", 00:20:38.960 "thread": "nvmf_tgt_poll_group_000", 00:20:38.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:38.960 "listen_address": { 00:20:38.961 "trtype": "TCP", 00:20:38.961 "adrfam": "IPv4", 00:20:38.961 "traddr": "10.0.0.2", 00:20:38.961 "trsvcid": "4420" 00:20:38.961 }, 00:20:38.961 "peer_address": { 00:20:38.961 "trtype": "TCP", 00:20:38.961 "adrfam": "IPv4", 00:20:38.961 "traddr": "10.0.0.1", 00:20:38.961 "trsvcid": "36322" 00:20:38.961 }, 00:20:38.961 "auth": { 00:20:38.961 "state": "completed", 00:20:38.961 "digest": "sha384", 00:20:38.961 "dhgroup": "ffdhe4096" 00:20:38.961 } 00:20:38.961 } 00:20:38.961 ]' 00:20:38.961 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.961 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.961 10:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.961 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.961 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.220 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.220 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.220 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.220 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:39.221 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:39.791 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.051 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.051 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.311 00:20:40.311 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.311 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.311 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.571 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.571 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.571 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.571 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.571 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.571 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.571 { 00:20:40.571 "cntlid": 79, 00:20:40.571 "qid": 0, 00:20:40.571 "state": "enabled", 00:20:40.571 "thread": "nvmf_tgt_poll_group_000", 00:20:40.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:40.571 "listen_address": { 00:20:40.571 "trtype": "TCP", 00:20:40.571 "adrfam": "IPv4", 00:20:40.571 "traddr": "10.0.0.2", 00:20:40.571 "trsvcid": "4420" 00:20:40.571 }, 00:20:40.571 "peer_address": { 00:20:40.571 "trtype": "TCP", 00:20:40.571 "adrfam": "IPv4", 00:20:40.571 "traddr": "10.0.0.1", 00:20:40.571 "trsvcid": "36346" 00:20:40.571 }, 00:20:40.571 "auth": { 00:20:40.571 "state": "completed", 00:20:40.571 "digest": "sha384", 00:20:40.571 "dhgroup": "ffdhe4096" 00:20:40.571 } 00:20:40.571 } 00:20:40.571 ]' 00:20:40.571 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.571 10:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.571 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.831 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.831 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.831 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.831 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.831 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.831 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:40.831 10:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:41.771 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.771 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.771 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.771 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.771 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.771 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.771 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.771 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.796 10:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.796 10:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.366 00:20:42.366 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.366 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.366 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.366 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.366 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.366 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.366 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.366 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.366 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.366 { 00:20:42.366 "cntlid": 81, 00:20:42.366 "qid": 0, 00:20:42.366 "state": "enabled", 00:20:42.366 "thread": "nvmf_tgt_poll_group_000", 00:20:42.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:42.366 "listen_address": { 00:20:42.366 "trtype": "TCP", 00:20:42.366 "adrfam": "IPv4", 00:20:42.366 "traddr": "10.0.0.2", 00:20:42.366 "trsvcid": "4420" 00:20:42.366 }, 00:20:42.366 "peer_address": { 00:20:42.366 "trtype": "TCP", 00:20:42.366 "adrfam": "IPv4", 00:20:42.366 "traddr": "10.0.0.1", 00:20:42.366 "trsvcid": "36382" 00:20:42.366 }, 00:20:42.366 "auth": { 00:20:42.366 "state": "completed", 00:20:42.366 "digest": 
"sha384", 00:20:42.366 "dhgroup": "ffdhe6144" 00:20:42.366 } 00:20:42.366 } 00:20:42.366 ]' 00:20:42.367 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.367 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.367 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.626 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.626 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.626 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.626 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.626 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.886 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:42.886 10:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:43.459 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.459 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:43.459 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.459 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.459 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.459 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.459 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.459 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.719 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.720 10:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.979 00:20:43.979 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.979 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.979 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.240 { 00:20:44.240 "cntlid": 83, 00:20:44.240 "qid": 0, 00:20:44.240 "state": "enabled", 00:20:44.240 "thread": "nvmf_tgt_poll_group_000", 00:20:44.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:44.240 "listen_address": { 00:20:44.240 "trtype": "TCP", 00:20:44.240 "adrfam": "IPv4", 00:20:44.240 "traddr": "10.0.0.2", 00:20:44.240 
"trsvcid": "4420" 00:20:44.240 }, 00:20:44.240 "peer_address": { 00:20:44.240 "trtype": "TCP", 00:20:44.240 "adrfam": "IPv4", 00:20:44.240 "traddr": "10.0.0.1", 00:20:44.240 "trsvcid": "36408" 00:20:44.240 }, 00:20:44.240 "auth": { 00:20:44.240 "state": "completed", 00:20:44.240 "digest": "sha384", 00:20:44.240 "dhgroup": "ffdhe6144" 00:20:44.240 } 00:20:44.240 } 00:20:44.240 ]' 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.240 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.500 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:44.500 10:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:45.070 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.070 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:45.070 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.070 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.070 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.070 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.070 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.071 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.330 
10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.330 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.589 00:20:45.589 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.589 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.589 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.849 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.849 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.849 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.849 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.849 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.849 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.849 { 00:20:45.849 "cntlid": 85, 00:20:45.849 "qid": 0, 00:20:45.849 "state": "enabled", 00:20:45.849 "thread": "nvmf_tgt_poll_group_000", 00:20:45.849 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:45.849 "listen_address": { 00:20:45.849 "trtype": "TCP", 00:20:45.849 "adrfam": "IPv4", 00:20:45.849 "traddr": "10.0.0.2", 00:20:45.849 "trsvcid": "4420" 00:20:45.849 }, 00:20:45.849 "peer_address": { 00:20:45.849 "trtype": "TCP", 00:20:45.849 "adrfam": "IPv4", 00:20:45.849 "traddr": "10.0.0.1", 00:20:45.849 "trsvcid": "36422" 00:20:45.849 }, 00:20:45.849 "auth": { 00:20:45.849 "state": "completed", 00:20:45.849 "digest": "sha384", 00:20:45.849 "dhgroup": "ffdhe6144" 00:20:45.849 } 00:20:45.849 } 00:20:45.849 ]' 00:20:45.849 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.849 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.850 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.850 10:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.850 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.850 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.850 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.850 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.109 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:46.109 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:46.679 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.679 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.679 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.679 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.940 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.940 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.940 10:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.940 10:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.940 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.199 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.460 { 00:20:47.460 "cntlid": 87, 
00:20:47.460 "qid": 0, 00:20:47.460 "state": "enabled", 00:20:47.460 "thread": "nvmf_tgt_poll_group_000", 00:20:47.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:47.460 "listen_address": { 00:20:47.460 "trtype": "TCP", 00:20:47.460 "adrfam": "IPv4", 00:20:47.460 "traddr": "10.0.0.2", 00:20:47.460 "trsvcid": "4420" 00:20:47.460 }, 00:20:47.460 "peer_address": { 00:20:47.460 "trtype": "TCP", 00:20:47.460 "adrfam": "IPv4", 00:20:47.460 "traddr": "10.0.0.1", 00:20:47.460 "trsvcid": "44286" 00:20:47.460 }, 00:20:47.460 "auth": { 00:20:47.460 "state": "completed", 00:20:47.460 "digest": "sha384", 00:20:47.460 "dhgroup": "ffdhe6144" 00:20:47.460 } 00:20:47.460 } 00:20:47.460 ]' 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.460 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.721 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.721 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.721 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.721 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.721 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.721 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:47.721 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.664 10:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.237 00:20:49.237 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.237 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.237 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.237 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.237 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.237 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.237 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.237 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.237 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.237 { 00:20:49.237 "cntlid": 89, 00:20:49.237 "qid": 0, 00:20:49.237 "state": "enabled", 00:20:49.237 "thread": "nvmf_tgt_poll_group_000", 00:20:49.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:49.237 "listen_address": { 00:20:49.237 "trtype": "TCP", 00:20:49.237 "adrfam": "IPv4", 00:20:49.237 "traddr": "10.0.0.2", 00:20:49.237 "trsvcid": "4420" 00:20:49.237 }, 00:20:49.237 "peer_address": { 00:20:49.237 "trtype": "TCP", 00:20:49.237 "adrfam": "IPv4", 00:20:49.237 "traddr": "10.0.0.1", 00:20:49.237 "trsvcid": "44328" 00:20:49.237 }, 00:20:49.237 "auth": { 00:20:49.237 "state": "completed", 00:20:49.237 "digest": "sha384", 00:20:49.237 "dhgroup": "ffdhe8192" 00:20:49.237 } 00:20:49.237 } 00:20:49.237 ]' 00:20:49.237 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.497 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.497 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.497 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.497 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.497 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.497 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.497 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.759 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:49.759 10:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:50.332 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.332 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.332 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.332 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.332 10:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.332 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.332 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.332 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.593 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.165 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.165 { 00:20:51.165 "cntlid": 91, 00:20:51.165 "qid": 0, 00:20:51.165 "state": "enabled", 00:20:51.165 "thread": "nvmf_tgt_poll_group_000", 00:20:51.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:51.165 "listen_address": { 00:20:51.165 "trtype": "TCP", 00:20:51.165 "adrfam": "IPv4", 00:20:51.165 "traddr": "10.0.0.2", 00:20:51.165 "trsvcid": "4420" 00:20:51.165 }, 00:20:51.165 "peer_address": { 00:20:51.165 "trtype": "TCP", 00:20:51.165 "adrfam": "IPv4", 00:20:51.165 "traddr": "10.0.0.1", 00:20:51.165 "trsvcid": "44356" 00:20:51.165 }, 00:20:51.165 "auth": { 00:20:51.165 "state": "completed", 00:20:51.165 "digest": "sha384", 00:20:51.165 "dhgroup": "ffdhe8192" 00:20:51.165 } 00:20:51.165 } 00:20:51.165 ]' 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.165 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.426 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.426 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.426 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.426 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.426 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.426 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:51.426 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.370 10:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.370 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.940 00:20:52.940 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.940 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.940 10:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.940 10:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.940 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.940 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.940 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.940 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.940 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.940 { 00:20:52.940 "cntlid": 93, 00:20:52.940 "qid": 0, 00:20:52.940 "state": "enabled", 00:20:52.940 "thread": "nvmf_tgt_poll_group_000", 00:20:52.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:52.940 "listen_address": { 00:20:52.940 "trtype": "TCP", 00:20:52.940 "adrfam": "IPv4", 00:20:52.940 "traddr": "10.0.0.2", 00:20:52.940 "trsvcid": "4420" 00:20:52.940 }, 00:20:52.940 "peer_address": { 00:20:52.941 "trtype": "TCP", 00:20:52.941 "adrfam": "IPv4", 00:20:52.941 "traddr": "10.0.0.1", 00:20:52.941 "trsvcid": "44378" 00:20:52.941 }, 00:20:52.941 "auth": { 00:20:52.941 "state": "completed", 00:20:52.941 "digest": "sha384", 00:20:52.941 "dhgroup": "ffdhe8192" 00:20:52.941 } 00:20:52.941 } 00:20:52.941 ]' 00:20:53.200 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.200 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.200 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.200 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.200 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.200 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.201 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.201 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.460 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:53.460 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:54.032 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.032 10:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:54.032 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.032 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.032 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.032 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.032 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.032 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.293 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.865 00:20:54.865 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.865 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.865 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.865 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.865 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.865 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.865 10:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.865 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.865 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.865 { 00:20:54.865 "cntlid": 95, 00:20:54.865 "qid": 0, 00:20:54.865 "state": "enabled", 00:20:54.865 "thread": "nvmf_tgt_poll_group_000", 00:20:54.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:54.865 "listen_address": { 00:20:54.865 "trtype": "TCP", 00:20:54.865 "adrfam": "IPv4", 00:20:54.865 "traddr": "10.0.0.2", 00:20:54.865 "trsvcid": "4420" 00:20:54.865 }, 00:20:54.865 "peer_address": { 00:20:54.865 "trtype": "TCP", 00:20:54.865 "adrfam": "IPv4", 00:20:54.865 "traddr": "10.0.0.1", 00:20:54.865 "trsvcid": "44404" 00:20:54.865 }, 00:20:54.865 "auth": { 00:20:54.865 "state": "completed", 00:20:54.865 "digest": "sha384", 00:20:54.865 "dhgroup": "ffdhe8192" 00:20:54.865 } 00:20:54.865 } 00:20:54.865 ]' 00:20:54.865 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.865 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.865 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.127 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.127 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.127 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.127 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.127 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.387 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:55.387 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:20:55.958 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.959 10:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.959 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.959 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.959 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.959 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:55.959 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.959 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.959 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.959 10:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.959 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:55.959 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.959 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.959 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.959 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.959 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.959 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.959 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.959 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.219 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.219 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.219 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.219 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.219 00:20:56.219 
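The connect_authenticate helper itself (target/auth.sh@65-78 in the trace) registers the host NQN on the subsystem with the key under test, attaches a controller through the host RPC socket, verifies the negotiated parameters, and detaches. A sketch assembled from the traced commands; $subnqn and $hostnqn stand in for the literal NQNs above and the exact function body is an assumption:

connect_authenticate() {
  local digest=$1 dhgroup=$2 key=key$3
  # Pass a controller (bidirectional) key only when ckeys[$3] is non-empty —
  # this expansion appears verbatim at target/auth.sh@68.
  local ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" "${ckey[@]}"
  # bdev_connect expands to hostrpc bdev_nvme_attach_controller (target/auth.sh@60).
  bdev_connect -b nvme0 --dhchap-key "$key" "${ckey[@]}"
  # ... qpair verification as traced at @73-77 ...
  hostrpc bdev_nvme_detach_controller nvme0
}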
10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.220 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.220 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.480 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.480 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.480 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.480 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.480 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.480 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.480 { 00:20:56.480 "cntlid": 97, 00:20:56.480 "qid": 0, 00:20:56.480 "state": "enabled", 00:20:56.480 "thread": "nvmf_tgt_poll_group_000", 00:20:56.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:56.480 "listen_address": { 00:20:56.480 "trtype": "TCP", 00:20:56.480 "adrfam": "IPv4", 00:20:56.480 "traddr": "10.0.0.2", 00:20:56.480 "trsvcid": "4420" 00:20:56.480 }, 00:20:56.480 "peer_address": { 00:20:56.480 "trtype": "TCP", 00:20:56.480 "adrfam": "IPv4", 00:20:56.480 "traddr": "10.0.0.1", 00:20:56.481 "trsvcid": "40496" 00:20:56.481 }, 00:20:56.481 "auth": { 00:20:56.481 "state": "completed", 00:20:56.481 "digest": "sha512", 00:20:56.481 "dhgroup": "null" 00:20:56.481 } 00:20:56.481 } 00:20:56.481 ]' 00:20:56.481 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.481 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.481 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.742 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.742 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.742 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.742 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.742 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.742 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:56.742 10:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.802 10:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.068 00:20:58.068 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.068 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.068 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.068 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.068 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.068 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.068 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.068 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.068 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.068 { 00:20:58.068 "cntlid": 99, 00:20:58.068 "qid": 0, 00:20:58.068 "state": "enabled", 00:20:58.068 "thread": "nvmf_tgt_poll_group_000", 00:20:58.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:58.068 "listen_address": { 00:20:58.068 "trtype": "TCP", 00:20:58.068 "adrfam": "IPv4", 00:20:58.068 "traddr": "10.0.0.2", 00:20:58.068 "trsvcid": "4420" 00:20:58.068 }, 00:20:58.068 "peer_address": { 00:20:58.068 "trtype": "TCP", 00:20:58.069 "adrfam": "IPv4", 00:20:58.069 "traddr": "10.0.0.1", 00:20:58.069 "trsvcid": "40530" 00:20:58.069 }, 00:20:58.069 "auth": { 00:20:58.069 "state": "completed", 00:20:58.069 "digest": "sha512", 00:20:58.069 "dhgroup": "null" 00:20:58.069 } 00:20:58.069 } 00:20:58.069 ]' 00:20:58.069 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.069 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.069 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.329 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:58.329 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.329 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.329 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.329 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.590 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:58.590 10:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:20:59.160 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.160 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:59.160 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.160 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.160 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.160 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.160 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.160 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
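After each attach, the script asserts that exactly the expected parameters were negotiated on the target-side qpair. The checks below use the jq filters visible in the trace (target/auth.sh@73-77); the herestring plumbing is an assumption, the filters and expected values are not:

# The host must report exactly one controller, named nvme0.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# The qpair must show the negotiated digest, DH group, and a completed auth state.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]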
00:20:59.421 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.421 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.681 { 00:20:59.681 "cntlid": 101, 00:20:59.681 "qid": 0, 00:20:59.681 "state": "enabled", 00:20:59.681 "thread": "nvmf_tgt_poll_group_000", 00:20:59.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:59.681 "listen_address": { 00:20:59.681 "trtype": "TCP", 00:20:59.681 "adrfam": "IPv4", 00:20:59.681 "traddr": "10.0.0.2", 00:20:59.681 "trsvcid": "4420" 00:20:59.681 }, 00:20:59.681 "peer_address": { 00:20:59.681 "trtype": "TCP", 00:20:59.681 "adrfam": "IPv4", 00:20:59.681 "traddr": "10.0.0.1", 00:20:59.681 "trsvcid": "40554" 00:20:59.681 }, 00:20:59.681 "auth": { 00:20:59.681 "state": "completed", 00:20:59.681 "digest": "sha512", 00:20:59.681 "dhgroup": "null" 00:20:59.681 } 00:20:59.681 } 00:20:59.681 ]' 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.681 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.942 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:59.942 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.942 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.942 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.942 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.942 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:20:59.942 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:00.882 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.883 10:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.143 00:21:01.143 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.143 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.143 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.403 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.404 { 00:21:01.404 "cntlid": 103, 00:21:01.404 "qid": 0, 00:21:01.404 "state": "enabled", 00:21:01.404 "thread": "nvmf_tgt_poll_group_000", 00:21:01.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:01.404 "listen_address": { 00:21:01.404 "trtype": "TCP", 00:21:01.404 "adrfam": "IPv4", 00:21:01.404 "traddr": "10.0.0.2", 00:21:01.404 "trsvcid": "4420" 00:21:01.404 }, 00:21:01.404 "peer_address": { 00:21:01.404 "trtype": "TCP", 00:21:01.404 "adrfam": "IPv4", 00:21:01.404 "traddr": "10.0.0.1", 00:21:01.404 "trsvcid": "40580" 00:21:01.404 }, 00:21:01.404 "auth": { 00:21:01.404 "state": "completed", 00:21:01.404 "digest": "sha512", 00:21:01.404 "dhgroup": "null" 00:21:01.404 } 00:21:01.404 } 00:21:01.404 ]' 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.404 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.664 10:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:01.664 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:02.236 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.236 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:02.236 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.236 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.236 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.236 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.236 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.236 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.236 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
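[The key3 iteration above is the one-way variant: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion drops the --dhchap-ctrlr-key argument and only the host authenticates itself. The nvme-cli equivalent, sketched with placeholder values ($HOSTNQN, $HOSTID, and the elided secret are illustrative, mirroring the flags used verbatim in the trace):

    # One-way DH-HMAC-CHAP: no --dhchap-ctrl-secret, so the controller
    # is not challenged back by the host.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
        -i 1 -l 0 -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret "DHHC-1:03:<base64 secret>"

    # Tear the session down again once the controller is up.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Compare the bidirectional cycles, where both --dhchap-secret and --dhchap-ctrl-secret are supplied and the controller must also prove possession of its key.]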
00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.497 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.758 00:21:02.758 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.758 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.758 10:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.019 { 00:21:03.019 "cntlid": 105, 00:21:03.019 "qid": 0, 00:21:03.019 "state": "enabled", 00:21:03.019 "thread": "nvmf_tgt_poll_group_000", 00:21:03.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:03.019 "listen_address": { 00:21:03.019 "trtype": "TCP", 00:21:03.019 "adrfam": "IPv4", 00:21:03.019 "traddr": "10.0.0.2", 00:21:03.019 "trsvcid": "4420" 00:21:03.019 }, 00:21:03.019 "peer_address": { 00:21:03.019 "trtype": "TCP", 00:21:03.019 "adrfam": "IPv4", 00:21:03.019 "traddr": "10.0.0.1", 00:21:03.019 "trsvcid": "40610" 00:21:03.019 }, 00:21:03.019 "auth": { 00:21:03.019 "state": "completed", 00:21:03.019 "digest": "sha512", 00:21:03.019 "dhgroup": "ffdhe2048" 00:21:03.019 } 00:21:03.019 } 00:21:03.019 ]' 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.019 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.019 10:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.279 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:03.279 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:03.848 10:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.848 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.848 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.849 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.849 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.849 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.849 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.849 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.108 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.368 00:21:04.368 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.368 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.368 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.647 { 00:21:04.647 "cntlid": 107, 00:21:04.647 "qid": 0, 00:21:04.647 "state": "enabled", 00:21:04.647 "thread": "nvmf_tgt_poll_group_000", 00:21:04.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:04.647 "listen_address": { 00:21:04.647 "trtype": "TCP", 00:21:04.647 "adrfam": "IPv4", 00:21:04.647 "traddr": "10.0.0.2", 00:21:04.647 "trsvcid": "4420" 00:21:04.647 }, 00:21:04.647 "peer_address": { 00:21:04.647 "trtype": "TCP", 00:21:04.647 "adrfam": "IPv4", 00:21:04.647 "traddr": "10.0.0.1", 00:21:04.647 "trsvcid": "40638" 00:21:04.647 }, 00:21:04.647 "auth": { 00:21:04.647 "state": "completed", 00:21:04.647 "digest": "sha512", 00:21:04.647 "dhgroup": "ffdhe2048" 00:21:04.647 } 00:21:04.647 } 00:21:04.647 ]' 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.647 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.907 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:21:04.907 10:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:21:05.479 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.479 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.479 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.479 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.479 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.479 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.479 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.479 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
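[Each successful attach is then verified by interrogating the target for its active queue pairs. Roughly, and assuming jq is installed (the jq filters below are the ones echoed in the trace; structuring them as [[ ]] assertions is a sketch of what the script's @73-@77 checks amount to):

    # Ask the target which queue pairs the subsystem currently carries.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The auth object on the qpair records what was actually negotiated,
    # so the test asserts digest, DH group, and a completed handshake.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

A mismatch in any field would fail the [[ ]] test and abort the run under set -e.]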
00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.740 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.001 00:21:06.001 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.001 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.001 10:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.001 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.001 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.001 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.001 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.001 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.001 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.001 { 00:21:06.001 "cntlid": 109, 00:21:06.001 "qid": 0, 00:21:06.001 "state": "enabled", 00:21:06.001 "thread": "nvmf_tgt_poll_group_000", 00:21:06.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:06.001 "listen_address": { 00:21:06.001 "trtype": "TCP", 00:21:06.001 "adrfam": "IPv4", 00:21:06.001 "traddr": "10.0.0.2", 00:21:06.001 "trsvcid": "4420" 00:21:06.001 }, 00:21:06.001 "peer_address": { 00:21:06.001 "trtype": "TCP", 00:21:06.001 "adrfam": "IPv4", 00:21:06.001 "traddr": "10.0.0.1", 00:21:06.001 "trsvcid": "53192" 00:21:06.001 }, 00:21:06.001 "auth": { 00:21:06.001 "state": "completed", 00:21:06.001 "digest": "sha512", 00:21:06.001 "dhgroup": "ffdhe2048" 00:21:06.001 } 00:21:06.001 } 00:21:06.001 ]' 00:21:06.001 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.262 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.262 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.262 10:48:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.262 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.262 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.262 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.262 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.524 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:06.524 10:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:07.096 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.096 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.096 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.096 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.096 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.096 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.096 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.096 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.358 10:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:07.358 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.359 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.359 00:21:07.619 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.620 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.620 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.620 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.620 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.620 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.620 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.620 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.620 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.620 { 00:21:07.620 "cntlid": 111, 00:21:07.620 "qid": 0, 00:21:07.620 "state": "enabled", 00:21:07.620 "thread": "nvmf_tgt_poll_group_000", 00:21:07.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:07.620 "listen_address": { 00:21:07.620 "trtype": "TCP", 00:21:07.620 "adrfam": "IPv4", 00:21:07.620 "traddr": "10.0.0.2", 00:21:07.620 "trsvcid": "4420" 00:21:07.620 }, 00:21:07.620 "peer_address": { 00:21:07.620 "trtype": "TCP", 00:21:07.620 "adrfam": "IPv4", 00:21:07.620 "traddr": "10.0.0.1", 00:21:07.620 "trsvcid": "53212" 00:21:07.620 }, 00:21:07.620 "auth": { 00:21:07.620 "state": "completed", 00:21:07.620 "digest": "sha512", 00:21:07.620 "dhgroup": "ffdhe2048" 00:21:07.620 } 00:21:07.620 } 00:21:07.620 ]' 00:21:07.620 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.620 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.620 
10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.881 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.881 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.881 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.881 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.881 10:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.881 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:08.142 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.714 10:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.975 00:21:08.975 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.975 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.975 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.236 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.236 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.236 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.236 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.236 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.236 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.236 { 00:21:09.236 "cntlid": 113, 00:21:09.236 "qid": 0, 00:21:09.236 "state": "enabled", 00:21:09.236 "thread": "nvmf_tgt_poll_group_000", 00:21:09.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:09.236 "listen_address": { 00:21:09.236 "trtype": "TCP", 00:21:09.236 "adrfam": "IPv4", 00:21:09.236 "traddr": "10.0.0.2", 00:21:09.236 "trsvcid": "4420" 00:21:09.236 }, 00:21:09.236 "peer_address": { 00:21:09.236 "trtype": "TCP", 00:21:09.236 "adrfam": "IPv4", 00:21:09.236 "traddr": "10.0.0.1", 00:21:09.236 "trsvcid": "53246" 00:21:09.236 }, 00:21:09.236 "auth": { 00:21:09.236 "state": "completed", 00:21:09.236 "digest": "sha512", 00:21:09.236 "dhgroup": "ffdhe3072" 00:21:09.236 } 00:21:09.236 } 00:21:09.236 ]' 00:21:09.236 10:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.236 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.236 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.236 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.236 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.498 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.498 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.498 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.498 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:09.498 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:10.069 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.330 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.591 00:21:10.591 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.591 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.591 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.852 { 00:21:10.852 "cntlid": 115, 00:21:10.852 "qid": 0, 00:21:10.852 "state": "enabled", 00:21:10.852 "thread": "nvmf_tgt_poll_group_000", 00:21:10.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:10.852 "listen_address": { 00:21:10.852 "trtype": "TCP", 00:21:10.852 "adrfam": "IPv4", 00:21:10.852 "traddr": "10.0.0.2", 00:21:10.852 "trsvcid": "4420" 00:21:10.852 }, 00:21:10.852 "peer_address": { 00:21:10.852 "trtype": "TCP", 00:21:10.852 "adrfam": "IPv4", 
00:21:10.852 "traddr": "10.0.0.1", 00:21:10.852 "trsvcid": "53274" 00:21:10.852 }, 00:21:10.852 "auth": { 00:21:10.852 "state": "completed", 00:21:10.852 "digest": "sha512", 00:21:10.852 "dhgroup": "ffdhe3072" 00:21:10.852 } 00:21:10.852 } 00:21:10.852 ]' 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.852 10:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.852 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.852 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.852 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.113 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:21:11.113 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:21:11.685 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.947 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.947 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.947 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.947 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.947 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.947 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.947 10:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
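[The @119/@120 markers recurring through this stretch come from the nested loops driving the whole section. Schematically, with the digest pinned to sha512 as it is in this part of the log and the connect_authenticate body elided, the control flow visible in the trace is:

    # Outer loop walks the DH groups, inner loop the registered key slots.
    for dhgroup in "${dhgroups[@]}"; do        # null ffdhe2048 ffdhe3072 ...
        for keyid in "${!keys[@]}"; do         # 0 1 2 3
            # Re-pin the host to a single digest/dhgroup per iteration.
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done

This explains the rhythm of the log: one bdev_nvme_set_options echo, then one full add_host/attach/verify/detach cycle, repeated for every key under every DH group.]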
00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.947 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.208 00:21:12.208 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.208 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.208 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.469 { 00:21:12.469 "cntlid": 117, 00:21:12.469 "qid": 0, 00:21:12.469 "state": "enabled", 00:21:12.469 "thread": "nvmf_tgt_poll_group_000", 00:21:12.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:12.469 "listen_address": { 00:21:12.469 "trtype": "TCP", 
00:21:12.469 "adrfam": "IPv4", 00:21:12.469 "traddr": "10.0.0.2", 00:21:12.469 "trsvcid": "4420" 00:21:12.469 }, 00:21:12.469 "peer_address": { 00:21:12.469 "trtype": "TCP", 00:21:12.469 "adrfam": "IPv4", 00:21:12.469 "traddr": "10.0.0.1", 00:21:12.469 "trsvcid": "53300" 00:21:12.469 }, 00:21:12.469 "auth": { 00:21:12.469 "state": "completed", 00:21:12.469 "digest": "sha512", 00:21:12.469 "dhgroup": "ffdhe3072" 00:21:12.469 } 00:21:12.469 } 00:21:12.469 ]' 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.469 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.730 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:12.730 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:13.301 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.562 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.824 00:21:13.824 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.824 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.824 10:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.085 { 00:21:14.085 "cntlid": 119, 00:21:14.085 "qid": 0, 00:21:14.085 "state": "enabled", 00:21:14.085 "thread": "nvmf_tgt_poll_group_000", 00:21:14.085 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:14.085 "listen_address": { 00:21:14.085 "trtype": "TCP", 00:21:14.085 "adrfam": "IPv4", 00:21:14.085 "traddr": "10.0.0.2", 00:21:14.085 "trsvcid": "4420" 00:21:14.085 }, 00:21:14.085 "peer_address": { 00:21:14.085 "trtype": "TCP", 00:21:14.085 "adrfam": "IPv4", 00:21:14.085 "traddr": "10.0.0.1", 00:21:14.085 "trsvcid": "53328" 00:21:14.085 }, 00:21:14.085 "auth": { 00:21:14.085 "state": "completed", 00:21:14.085 "digest": "sha512", 00:21:14.085 "dhgroup": "ffdhe3072" 00:21:14.085 } 00:21:14.085 } 00:21:14.085 ]' 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.085 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.346 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:14.347 10:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:14.917 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.917 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.917 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.917 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.917 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.917 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.918 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.918 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.918 10:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.178 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.439 00:21:15.439 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.439 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.439 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.700 10:48:54 
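[editor's note] The add_host/attach pair above is the core of connect_authenticate: the target is told which key to demand from this host NQN, and the host dials in with the matching pair. The sketch below assumes key0/ckey0 were registered in the keyring earlier in the job (that setup precedes this excerpt) and that the target-side rpc_cmd in the log is a wrapper for the same rpc.py against the target's default socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Target side: accept this host only with DH-CHAP key0; adding ckey0
    # makes the authentication bidirectional (host also challenges the
    # controller).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach to the listener with the matching key pair.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0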
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.700 { 00:21:15.700 "cntlid": 121, 00:21:15.700 "qid": 0, 00:21:15.700 "state": "enabled", 00:21:15.700 "thread": "nvmf_tgt_poll_group_000", 00:21:15.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:15.700 "listen_address": { 00:21:15.700 "trtype": "TCP", 00:21:15.700 "adrfam": "IPv4", 00:21:15.700 "traddr": "10.0.0.2", 00:21:15.700 "trsvcid": "4420" 00:21:15.700 }, 00:21:15.700 "peer_address": { 00:21:15.700 "trtype": "TCP", 00:21:15.700 "adrfam": "IPv4", 00:21:15.700 "traddr": "10.0.0.1", 00:21:15.700 "trsvcid": "53362" 00:21:15.700 }, 00:21:15.700 "auth": { 00:21:15.700 "state": "completed", 00:21:15.700 "digest": "sha512", 00:21:15.700 "dhgroup": "ffdhe4096" 00:21:15.700 } 00:21:15.700 } 00:21:15.700 ]' 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.700 10:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.961 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:15.961 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:16.532 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.532 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:16.532 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.532 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.532 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
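[editor's note] Success is judged from the target's view, not merely from the attach returning: nvmf_subsystem_get_qpairs must report the negotiated digest and group and an auth state of completed. The backslash-riddled comparisons in the log (\s\h\a\5\1\2 and so on) are just xtrace's rendering of quoted [[ == ]] patterns; unescaped, the checks amount to this sketch (rpc path and target socket as assumed above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # Assert what the target actually negotiated on the admin qpair.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]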
00:21:16.532 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.532 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.532 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.793 10:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.054 00:21:17.054 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.054 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.054 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.315 { 00:21:17.315 "cntlid": 123, 00:21:17.315 "qid": 0, 00:21:17.315 "state": "enabled", 00:21:17.315 "thread": "nvmf_tgt_poll_group_000", 00:21:17.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:17.315 "listen_address": { 00:21:17.315 "trtype": "TCP", 00:21:17.315 "adrfam": "IPv4", 00:21:17.315 "traddr": "10.0.0.2", 00:21:17.315 "trsvcid": "4420" 00:21:17.315 }, 00:21:17.315 "peer_address": { 00:21:17.315 "trtype": "TCP", 00:21:17.315 "adrfam": "IPv4", 00:21:17.315 "traddr": "10.0.0.1", 00:21:17.315 "trsvcid": "42832" 00:21:17.315 }, 00:21:17.315 "auth": { 00:21:17.315 "state": "completed", 00:21:17.315 "digest": "sha512", 00:21:17.315 "dhgroup": "ffdhe4096" 00:21:17.315 } 00:21:17.315 } 00:21:17.315 ]' 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.315 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.575 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:21:17.575 10:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:21:18.145 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.145 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:18.145 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.145 10:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.145 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.145 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.145 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.145 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.406 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.667 00:21:18.667 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.667 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.667 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.929 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.929 10:48:57 
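[editor's note] For the kernel-initiator leg, nvme-cli takes literal DHHC-1 strings instead of keyring names. Reading the format as DHHC-1:<transform>:<base64 secret>: with 00 meaning an untransformed secret and 01/02/03 meaning SHA-256/384/512-derived keys follows the NVMe in-band authentication spec; the log itself does not spell that out, so treat it as an inference. A trimmed sketch, with '<base64>' standing in for the full blobs logged above:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Kernel path mirrors the RPC attach; --hostid is the uuid portion of
    # the host NQN in this run.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "${hostnqn##*:}" -l 0 \
        --dhchap-secret 'DHHC-1:01:<base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0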
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.929 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.929 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.929 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.929 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.929 { 00:21:18.929 "cntlid": 125, 00:21:18.929 "qid": 0, 00:21:18.929 "state": "enabled", 00:21:18.929 "thread": "nvmf_tgt_poll_group_000", 00:21:18.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:18.929 "listen_address": { 00:21:18.929 "trtype": "TCP", 00:21:18.929 "adrfam": "IPv4", 00:21:18.929 "traddr": "10.0.0.2", 00:21:18.929 "trsvcid": "4420" 00:21:18.929 }, 00:21:18.929 "peer_address": { 00:21:18.929 "trtype": "TCP", 00:21:18.929 "adrfam": "IPv4", 00:21:18.929 "traddr": "10.0.0.1", 00:21:18.929 "trsvcid": "42862" 00:21:18.929 }, 00:21:18.929 "auth": { 00:21:18.929 "state": "completed", 00:21:18.929 "digest": "sha512", 00:21:18.929 "dhgroup": "ffdhe4096" 00:21:18.929 } 00:21:18.929 } 00:21:18.929 ]' 00:21:18.929 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.929 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.929 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.929 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.929 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.929 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.929 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.929 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.189 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:19.189 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:19.760 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.020 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.020 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.020 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.020 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.020 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.020 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.020 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.020 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:20.020 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.020 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.020 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:20.021 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.021 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.021 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:20.021 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.021 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.021 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.021 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.021 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.021 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.280 00:21:20.280 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.280 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.280 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.540 10:48:59 
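[editor's note] The key3 passes differ from the others on purpose: the ${ckeys[$3]:+...} expansion at target/auth.sh@68 emits --dhchap-ctrlr-key only when a controller key exists for that index, and ckeys[3] is evidently empty in this run (the key3 add_host calls above carry no controller key). So key3 exercises unidirectional authentication: the host proves itself, the controller is not challenged back. A trimmed sketch of that conditional, assuming the arrays populated earlier in the suite:

    connect_authenticate() {  # args: digest dhgroup keyid (trimmed sketch)
        # Expands to zero words when ckeys[$3] is empty, so the flag is
        # simply omitted for key3.
        local ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$3" "${ckey[@]}"
    }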
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.540 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.540 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.540 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.540 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.540 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.540 { 00:21:20.540 "cntlid": 127, 00:21:20.540 "qid": 0, 00:21:20.540 "state": "enabled", 00:21:20.540 "thread": "nvmf_tgt_poll_group_000", 00:21:20.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:20.540 "listen_address": { 00:21:20.540 "trtype": "TCP", 00:21:20.540 "adrfam": "IPv4", 00:21:20.540 "traddr": "10.0.0.2", 00:21:20.540 "trsvcid": "4420" 00:21:20.541 }, 00:21:20.541 "peer_address": { 00:21:20.541 "trtype": "TCP", 00:21:20.541 "adrfam": "IPv4", 00:21:20.541 "traddr": "10.0.0.1", 00:21:20.541 "trsvcid": "42894" 00:21:20.541 }, 00:21:20.541 "auth": { 00:21:20.541 "state": "completed", 00:21:20.541 "digest": "sha512", 00:21:20.541 "dhgroup": "ffdhe4096" 00:21:20.541 } 00:21:20.541 } 00:21:20.541 ]' 00:21:20.541 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.541 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.541 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.541 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.541 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.801 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.801 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.801 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.801 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:20.801 10:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:21.373 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.633 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:21.633 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.633 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.633 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.633 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.633 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.633 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:21.633 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:21.633 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:21.633 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.634 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.204 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.204 
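[editor's note] Zooming out, the @119/@120 loop markers give the shape of this sweep: each FF-DHE group is paired with every key index under the sha512 digest before the next group starts, with a full set_options/connect/verify/teardown cycle per pairing. ffdhe2048 would have run before this excerpt begins; including it in the list below is an assumption based on the groups SPDK advertises:

    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do      # target/auth.sh@119
        for keyid in "${!keys[@]}"; do       # target/auth.sh@120
            # Restrict the host (@121), then run one authenticated
            # attach/verify/detach cycle (@123).
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done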
10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.204 { 00:21:22.204 "cntlid": 129, 00:21:22.204 "qid": 0, 00:21:22.204 "state": "enabled", 00:21:22.204 "thread": "nvmf_tgt_poll_group_000", 00:21:22.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:22.204 "listen_address": { 00:21:22.204 "trtype": "TCP", 00:21:22.204 "adrfam": "IPv4", 00:21:22.204 "traddr": "10.0.0.2", 00:21:22.204 "trsvcid": "4420" 00:21:22.204 }, 00:21:22.204 "peer_address": { 00:21:22.204 "trtype": "TCP", 00:21:22.204 "adrfam": "IPv4", 00:21:22.204 "traddr": "10.0.0.1", 00:21:22.204 "trsvcid": "42912" 00:21:22.204 }, 00:21:22.204 "auth": { 00:21:22.204 "state": "completed", 00:21:22.204 "digest": "sha512", 00:21:22.204 "dhgroup": "ffdhe6144" 00:21:22.204 } 00:21:22.204 } 00:21:22.204 ]' 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.204 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.465 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.465 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.465 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.465 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.465 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.465 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:22.465 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret 
DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:23.405 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.406 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.666 00:21:23.666 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.666 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.666 10:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.926 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.926 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.926 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.926 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.926 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.926 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.926 { 00:21:23.926 "cntlid": 131, 00:21:23.926 "qid": 0, 00:21:23.926 "state": "enabled", 00:21:23.926 "thread": "nvmf_tgt_poll_group_000", 00:21:23.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:23.926 "listen_address": { 00:21:23.926 "trtype": "TCP", 00:21:23.926 "adrfam": "IPv4", 00:21:23.926 "traddr": "10.0.0.2", 00:21:23.926 "trsvcid": "4420" 00:21:23.926 }, 00:21:23.926 "peer_address": { 00:21:23.926 "trtype": "TCP", 00:21:23.926 "adrfam": "IPv4", 00:21:23.926 "traddr": "10.0.0.1", 00:21:23.926 "trsvcid": "42932" 00:21:23.926 }, 00:21:23.926 "auth": { 00:21:23.926 "state": "completed", 00:21:23.926 "digest": "sha512", 00:21:23.926 "dhgroup": "ffdhe6144" 00:21:23.927 } 00:21:23.927 } 00:21:23.927 ]' 00:21:23.927 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.927 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.927 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.186 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.186 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.186 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.186 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.186 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.187 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:21:24.187 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:21:25.131 10:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.131 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.392 00:21:25.392 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.392 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.392 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.653 { 00:21:25.653 "cntlid": 133, 00:21:25.653 "qid": 0, 00:21:25.653 "state": "enabled", 00:21:25.653 "thread": "nvmf_tgt_poll_group_000", 00:21:25.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:25.653 "listen_address": { 00:21:25.653 "trtype": "TCP", 00:21:25.653 "adrfam": "IPv4", 00:21:25.653 "traddr": "10.0.0.2", 00:21:25.653 "trsvcid": "4420" 00:21:25.653 }, 00:21:25.653 "peer_address": { 00:21:25.653 "trtype": "TCP", 00:21:25.653 "adrfam": "IPv4", 00:21:25.653 "traddr": "10.0.0.1", 00:21:25.653 "trsvcid": "42948" 00:21:25.653 }, 00:21:25.653 "auth": { 00:21:25.653 "state": "completed", 00:21:25.653 "digest": "sha512", 00:21:25.653 "dhgroup": "ffdhe6144" 00:21:25.653 } 00:21:25.653 } 00:21:25.653 ]' 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.653 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.914 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.914 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.914 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.914 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret 
DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:25.914 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:26.858 10:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.119 00:21:27.119 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.119 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.119 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.379 { 00:21:27.379 "cntlid": 135, 00:21:27.379 "qid": 0, 00:21:27.379 "state": "enabled", 00:21:27.379 "thread": "nvmf_tgt_poll_group_000", 00:21:27.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:27.379 "listen_address": { 00:21:27.379 "trtype": "TCP", 00:21:27.379 "adrfam": "IPv4", 00:21:27.379 "traddr": "10.0.0.2", 00:21:27.379 "trsvcid": "4420" 00:21:27.379 }, 00:21:27.379 "peer_address": { 00:21:27.379 "trtype": "TCP", 00:21:27.379 "adrfam": "IPv4", 00:21:27.379 "traddr": "10.0.0.1", 00:21:27.379 "trsvcid": "36280" 00:21:27.379 }, 00:21:27.379 "auth": { 00:21:27.379 "state": "completed", 00:21:27.379 "digest": "sha512", 00:21:27.379 "dhgroup": "ffdhe6144" 00:21:27.379 } 00:21:27.379 } 00:21:27.379 ]' 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.379 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.640 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.640 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.640 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.640 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:27.640 10:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:28.582 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.583 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.153 00:21:29.153 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.153 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.153 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.153 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.153 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.153 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.153 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.153 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.153 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.153 { 00:21:29.153 "cntlid": 137, 00:21:29.153 "qid": 0, 00:21:29.153 "state": "enabled", 00:21:29.153 "thread": "nvmf_tgt_poll_group_000", 00:21:29.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:29.153 "listen_address": { 00:21:29.153 "trtype": "TCP", 00:21:29.153 "adrfam": "IPv4", 00:21:29.153 "traddr": "10.0.0.2", 00:21:29.153 "trsvcid": "4420" 00:21:29.153 }, 00:21:29.153 "peer_address": { 00:21:29.153 "trtype": "TCP", 00:21:29.153 "adrfam": "IPv4", 00:21:29.154 "traddr": "10.0.0.1", 00:21:29.154 "trsvcid": "36308" 00:21:29.154 }, 00:21:29.154 "auth": { 00:21:29.154 "state": "completed", 00:21:29.154 "digest": "sha512", 00:21:29.154 "dhgroup": "ffdhe8192" 00:21:29.154 } 00:21:29.154 } 00:21:29.154 ]' 00:21:29.154 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.415 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.415 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.415 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.415 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.415 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.415 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.415 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.675 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:29.675 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:30.255 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.255 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:30.255 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.255 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.255 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.255 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.255 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.255 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.517 10:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.517 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.778 00:21:30.778 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.778 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.778 10:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.038 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.038 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.038 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.038 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.038 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.038 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.038 { 00:21:31.038 "cntlid": 139, 00:21:31.038 "qid": 0, 00:21:31.038 "state": "enabled", 00:21:31.038 "thread": "nvmf_tgt_poll_group_000", 00:21:31.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:31.038 "listen_address": { 00:21:31.038 "trtype": "TCP", 00:21:31.038 "adrfam": "IPv4", 00:21:31.038 "traddr": "10.0.0.2", 00:21:31.038 "trsvcid": "4420" 00:21:31.038 }, 00:21:31.038 "peer_address": { 00:21:31.038 "trtype": "TCP", 00:21:31.038 "adrfam": "IPv4", 00:21:31.038 "traddr": "10.0.0.1", 00:21:31.038 "trsvcid": "36328" 00:21:31.038 }, 00:21:31.038 "auth": { 00:21:31.038 "state": "completed", 00:21:31.038 "digest": "sha512", 00:21:31.038 "dhgroup": "ffdhe8192" 00:21:31.038 } 00:21:31.038 } 00:21:31.038 ]' 00:21:31.038 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.038 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.038 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.299 10:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:21:31.299 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: --dhchap-ctrl-secret DHHC-1:02:OGZmZGMzZjNmYzJkYThhOTUxODJjOWM3ZDk2NTU2YjkzZWM1NzgxNDBmNTYwNTk3eYMe9A==: 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.242 10:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.242 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.814 00:21:32.814 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.814 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.814 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.814 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.814 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.814 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.814 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.076 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.076 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.076 { 00:21:33.076 "cntlid": 141, 00:21:33.076 "qid": 0, 00:21:33.076 "state": "enabled", 00:21:33.076 "thread": "nvmf_tgt_poll_group_000", 00:21:33.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:33.076 "listen_address": { 00:21:33.076 "trtype": "TCP", 00:21:33.076 "adrfam": "IPv4", 00:21:33.076 "traddr": "10.0.0.2", 00:21:33.076 "trsvcid": "4420" 00:21:33.076 }, 00:21:33.076 "peer_address": { 00:21:33.076 "trtype": "TCP", 00:21:33.076 "adrfam": "IPv4", 00:21:33.076 "traddr": "10.0.0.1", 00:21:33.076 "trsvcid": "36350" 00:21:33.076 }, 00:21:33.076 "auth": { 00:21:33.076 "state": "completed", 00:21:33.076 "digest": "sha512", 00:21:33.076 "dhgroup": "ffdhe8192" 00:21:33.076 } 00:21:33.076 } 00:21:33.076 ]' 00:21:33.076 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.076 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.076 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.076 10:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.076 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.076 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.076 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.076 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.337 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:33.337 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:01:OWUxOGY0ZThjNGZjNWNlMzQxODMxZWFlNGIxYTY3ZjhrKgBr: 00:21:33.909 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.909 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:33.909 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.909 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.909 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.909 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.909 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.909 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.169 10:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.169 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.740 00:21:34.740 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.740 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.740 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.740 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.740 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.741 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.741 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.741 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.741 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.741 { 00:21:34.741 "cntlid": 143, 00:21:34.741 "qid": 0, 00:21:34.741 "state": "enabled", 00:21:34.741 "thread": "nvmf_tgt_poll_group_000", 00:21:34.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:34.741 "listen_address": { 00:21:34.741 "trtype": "TCP", 00:21:34.741 "adrfam": "IPv4", 00:21:34.741 "traddr": "10.0.0.2", 00:21:34.741 "trsvcid": "4420" 00:21:34.741 }, 00:21:34.741 "peer_address": { 00:21:34.741 "trtype": "TCP", 00:21:34.741 "adrfam": "IPv4", 00:21:34.741 "traddr": "10.0.0.1", 00:21:34.741 "trsvcid": "36384" 00:21:34.741 }, 00:21:34.741 "auth": { 00:21:34.741 "state": "completed", 00:21:34.741 "digest": "sha512", 00:21:34.741 "dhgroup": "ffdhe8192" 00:21:34.741 } 00:21:34.741 } 00:21:34.741 ]' 00:21:34.741 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.741 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.741 
10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.002 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.002 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.002 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.002 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.002 10:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.002 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:35.002 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.943 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.943 10:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.943 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.552 00:21:36.552 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.552 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.552 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.552 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.552 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.552 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.552 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.552 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.552 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.552 { 00:21:36.552 "cntlid": 145, 00:21:36.552 "qid": 0, 00:21:36.552 "state": "enabled", 00:21:36.552 "thread": "nvmf_tgt_poll_group_000", 00:21:36.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:36.552 "listen_address": { 00:21:36.552 "trtype": "TCP", 00:21:36.552 "adrfam": "IPv4", 00:21:36.552 "traddr": "10.0.0.2", 00:21:36.552 "trsvcid": "4420" 00:21:36.552 }, 00:21:36.552 "peer_address": { 00:21:36.552 
"trtype": "TCP", 00:21:36.552 "adrfam": "IPv4", 00:21:36.552 "traddr": "10.0.0.1", 00:21:36.552 "trsvcid": "47632" 00:21:36.552 }, 00:21:36.552 "auth": { 00:21:36.552 "state": "completed", 00:21:36.553 "digest": "sha512", 00:21:36.553 "dhgroup": "ffdhe8192" 00:21:36.553 } 00:21:36.553 } 00:21:36.553 ]' 00:21:36.553 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.813 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.813 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.813 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:36.813 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.813 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.813 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.813 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.073 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:37.073 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ODk1YzU4NTAzOGI2NDQ4YWI2MTg4OTJjZmQxZDVlYjBhOTNmODlmNjg3YzZlMDc4mLlwag==: --dhchap-ctrl-secret DHHC-1:03:YjFiNTlkZjYxMWJjZTU5OTMyOThmZGE1YzYxNTUzNzA5MGE5ZjIyNThmY2QyMTQxZTc2MGRkMGRiN2NjNTdhZBWMAMo=: 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:37.644 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.645 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:37.645 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.645 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:37.645 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:37.645 10:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:38.216 request: 00:21:38.216 { 00:21:38.216 "name": "nvme0", 00:21:38.216 "trtype": "tcp", 00:21:38.216 "traddr": "10.0.0.2", 00:21:38.216 "adrfam": "ipv4", 00:21:38.216 "trsvcid": "4420", 00:21:38.216 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:38.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:38.216 "prchk_reftag": false, 00:21:38.216 "prchk_guard": false, 00:21:38.216 "hdgst": false, 00:21:38.216 "ddgst": false, 00:21:38.216 "dhchap_key": "key2", 00:21:38.216 "allow_unrecognized_csi": false, 00:21:38.216 "method": "bdev_nvme_attach_controller", 00:21:38.216 "req_id": 1 00:21:38.216 } 00:21:38.216 Got JSON-RPC error response 00:21:38.216 response: 00:21:38.216 { 00:21:38.216 "code": -5, 00:21:38.216 "message": "Input/output error" 00:21:38.216 } 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.216 10:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.216 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.478 request: 00:21:38.478 { 00:21:38.478 "name": "nvme0", 00:21:38.478 "trtype": "tcp", 00:21:38.478 "traddr": "10.0.0.2", 00:21:38.478 "adrfam": "ipv4", 00:21:38.478 "trsvcid": "4420", 00:21:38.478 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:38.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:38.478 "prchk_reftag": false, 00:21:38.478 "prchk_guard": false, 00:21:38.478 "hdgst": false, 00:21:38.478 "ddgst": false, 00:21:38.478 "dhchap_key": "key1", 00:21:38.478 "dhchap_ctrlr_key": "ckey2", 00:21:38.478 "allow_unrecognized_csi": false, 00:21:38.478 "method": "bdev_nvme_attach_controller", 00:21:38.478 "req_id": 1 00:21:38.478 } 00:21:38.478 Got JSON-RPC error response 00:21:38.478 response: 00:21:38.478 { 00:21:38.478 "code": -5, 00:21:38.478 "message": "Input/output error" 00:21:38.478 } 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:38.478 10:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.478 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.051 request: 00:21:39.051 { 00:21:39.051 "name": "nvme0", 00:21:39.051 "trtype": "tcp", 00:21:39.051 "traddr": "10.0.0.2", 00:21:39.051 "adrfam": "ipv4", 00:21:39.051 "trsvcid": "4420", 00:21:39.051 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:39.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:39.051 "prchk_reftag": false, 00:21:39.051 "prchk_guard": false, 00:21:39.051 "hdgst": false, 00:21:39.051 "ddgst": false, 00:21:39.051 "dhchap_key": "key1", 00:21:39.051 "dhchap_ctrlr_key": "ckey1", 00:21:39.051 "allow_unrecognized_csi": false, 00:21:39.051 "method": "bdev_nvme_attach_controller", 00:21:39.051 "req_id": 1 00:21:39.051 } 00:21:39.051 Got JSON-RPC error response 00:21:39.051 response: 00:21:39.051 { 00:21:39.051 "code": -5, 00:21:39.051 "message": "Input/output error" 00:21:39.051 } 00:21:39.051 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:39.051 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.051 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.051 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.051 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.051 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.051 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 994904 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 994904 ']' 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 994904 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 994904 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 994904' 00:21:39.052 killing process with pid 994904 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 994904 00:21:39.052 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 994904 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1021206 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1021206 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1021206 ']' 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.312 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1021206 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1021206 ']' 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.251 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.251 null0 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YfY 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.COM ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.COM 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dK6 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.HwY ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HwY 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:40.512 10:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lUh 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.0Vr ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Vr 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lVu 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
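The trace above provisions DH-HMAC-CHAP in three steps: load the key files into the target keyring, bind a key name to the host entry on the subsystem, and attach a host-side controller that presents the matching key. A condensed sketch built from the RPCs visible in this run (socket paths, key files, and NQNs are the ones the trace itself uses):

  # Target side: register the key file and require it for this host.
  ./scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.lVu
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-key key3
  # Host side: attach over TCP, authenticating with the same key.
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3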
00:21:40.512 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.083 nvme0n1 00:21:41.343 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.343 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.343 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.343 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.343 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.343 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.343 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.343 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.343 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.343 { 00:21:41.343 "cntlid": 1, 00:21:41.343 "qid": 0, 00:21:41.343 "state": "enabled", 00:21:41.343 "thread": "nvmf_tgt_poll_group_000", 00:21:41.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:41.343 "listen_address": { 00:21:41.343 "trtype": "TCP", 00:21:41.343 "adrfam": "IPv4", 00:21:41.343 "traddr": "10.0.0.2", 00:21:41.343 "trsvcid": "4420" 00:21:41.343 }, 00:21:41.343 "peer_address": { 00:21:41.343 "trtype": "TCP", 00:21:41.343 "adrfam": "IPv4", 00:21:41.343 "traddr": "10.0.0.1", 00:21:41.343 "trsvcid": "47674" 00:21:41.343 }, 00:21:41.343 "auth": { 00:21:41.343 "state": "completed", 00:21:41.343 "digest": "sha512", 00:21:41.344 "dhgroup": "ffdhe8192" 00:21:41.344 } 00:21:41.344 } 00:21:41.344 ]' 00:21:41.344 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.344 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.344 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.604 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.604 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.604 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.604 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.604 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.864 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:41.864 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:42.434 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.695 request: 00:21:42.695 { 00:21:42.695 "name": "nvme0", 00:21:42.695 "trtype": "tcp", 00:21:42.695 "traddr": "10.0.0.2", 00:21:42.695 "adrfam": "ipv4", 00:21:42.695 "trsvcid": "4420", 00:21:42.695 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:42.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:42.695 "prchk_reftag": false, 00:21:42.695 "prchk_guard": false, 00:21:42.695 "hdgst": false, 00:21:42.695 "ddgst": false, 00:21:42.695 "dhchap_key": "key3", 00:21:42.695 "allow_unrecognized_csi": false, 00:21:42.695 "method": "bdev_nvme_attach_controller", 00:21:42.695 "req_id": 1 00:21:42.695 } 00:21:42.695 Got JSON-RPC error response 00:21:42.695 response: 00:21:42.695 { 00:21:42.695 "code": -5, 00:21:42.695 "message": "Input/output error" 00:21:42.695 } 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:42.695 10:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:42.956 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:42.956 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:42.956 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:42.956 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:42.956 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.956 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:42.956 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.956 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.956 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.956 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.216 request: 00:21:43.216 { 00:21:43.216 "name": "nvme0", 00:21:43.216 "trtype": "tcp", 00:21:43.216 "traddr": "10.0.0.2", 00:21:43.216 "adrfam": "ipv4", 00:21:43.216 "trsvcid": "4420", 00:21:43.216 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:43.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:43.216 "prchk_reftag": false, 00:21:43.216 "prchk_guard": false, 00:21:43.216 "hdgst": false, 00:21:43.216 "ddgst": false, 00:21:43.216 "dhchap_key": "key3", 00:21:43.216 "allow_unrecognized_csi": false, 00:21:43.216 "method": "bdev_nvme_attach_controller", 00:21:43.216 "req_id": 1 00:21:43.216 } 00:21:43.216 Got JSON-RPC error response 00:21:43.216 response: 00:21:43.216 { 00:21:43.216 "code": -5, 00:21:43.216 "message": "Input/output error" 00:21:43.216 } 00:21:43.216 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:43.216 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.216 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.216 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.216 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:43.216 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:43.216 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:43.216 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:43.216 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:43.217 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:43.478 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.478 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:43.478 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.478 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:43.478 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:43.478 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:43.739 request: 00:21:43.739 { 00:21:43.739 "name": "nvme0", 00:21:43.739 "trtype": "tcp", 00:21:43.739 "traddr": "10.0.0.2", 00:21:43.739 "adrfam": "ipv4", 00:21:43.739 "trsvcid": "4420", 00:21:43.739 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:43.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:43.739 "prchk_reftag": false, 00:21:43.739 "prchk_guard": false, 00:21:43.739 "hdgst": false, 00:21:43.739 "ddgst": false, 00:21:43.739 "dhchap_key": "key0", 00:21:43.739 "dhchap_ctrlr_key": "key1", 00:21:43.739 "allow_unrecognized_csi": false, 00:21:43.739 "method": "bdev_nvme_attach_controller", 00:21:43.739 "req_id": 1 00:21:43.739 } 00:21:43.739 Got JSON-RPC error response 00:21:43.739 response: 00:21:43.739 { 00:21:43.739 "code": -5, 00:21:43.739 "message": "Input/output error" 00:21:43.739 } 00:21:43.739 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:43.739 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.739 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.739 10:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.739 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:43.739 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:43.739 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:44.000 nvme0n1 00:21:44.000 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:44.000 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:44.000 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.000 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.000 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.000 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.261 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:44.261 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.261 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.261 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.261 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:44.261 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:44.261 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:45.203 nvme0n1 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:45.203 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.465 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.465 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:45.465 10:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: --dhchap-ctrl-secret DHHC-1:03:ZTc0NTljZjg2MjhjNzYwYmZlNTU2YjJlYjY2ODg0ZTk1M2RjYzk2NWNjYjE5ODU4OWI3N2IxZTUzMmY4YjcwZfOg0cs=: 00:21:46.037 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:46.037 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:46.037 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:46.037 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:46.037 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:46.037 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:46.037 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:46.037 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.037 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.298 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:46.298 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:46.298 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:46.298 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:46.298 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.298 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:46.298 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.298 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:46.298 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:46.298 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:46.559 request: 00:21:46.559 { 00:21:46.559 "name": "nvme0", 00:21:46.559 "trtype": "tcp", 00:21:46.559 "traddr": "10.0.0.2", 00:21:46.559 "adrfam": "ipv4", 00:21:46.559 "trsvcid": "4420", 00:21:46.559 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:46.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:46.559 "prchk_reftag": false, 00:21:46.559 "prchk_guard": false, 00:21:46.559 "hdgst": false, 00:21:46.559 "ddgst": false, 00:21:46.559 "dhchap_key": "key1", 00:21:46.559 "allow_unrecognized_csi": false, 00:21:46.559 "method": "bdev_nvme_attach_controller", 00:21:46.559 "req_id": 1 00:21:46.559 } 00:21:46.559 Got JSON-RPC error response 00:21:46.559 response: 00:21:46.559 { 00:21:46.559 "code": -5, 00:21:46.559 "message": "Input/output error" 00:21:46.559 } 00:21:46.559 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:46.559 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.559 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.559 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.559 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:46.559 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:46.559 10:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:47.502 nvme0n1 00:21:47.502 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:47.502 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:47.502 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.502 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.502 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.502 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.763 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:47.763 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.763 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.763 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.763 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:47.763 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:47.763 10:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:48.023 nvme0n1 00:21:48.023 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:48.023 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.023 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:48.284 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.284 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.284 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: '' 2s 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: ]] 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NmJmZjE0NjFjZTEyMGYzYTE5MzQ1MmNmODkyOGExMmQRMVEG: 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:48.545 10:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: 2s 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: ]] 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTYwZjE5NTlkMWIzODY4ZGQwZTNkZjhlNzg0Y2U0YTlkODU4Y2QwYzQ0MDczZWE3NSoNUQ==: 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:50.459 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:52.372 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:52.372 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:52.372 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:52.372 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:52.632 10:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:53.203 nvme0n1 00:21:53.203 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:53.203 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.203 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.203 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.203 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:53.203 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:53.773 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:53.773 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.773 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:54.034 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.034 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:54.034 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.034 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.034 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.034 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:54.034 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:54.034 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:54.034 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:54.034 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:54.294 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:54.864 request: 00:21:54.864 { 00:21:54.864 "name": "nvme0", 00:21:54.864 "dhchap_key": "key1", 00:21:54.864 "dhchap_ctrlr_key": "key3", 00:21:54.864 "method": "bdev_nvme_set_keys", 00:21:54.864 "req_id": 1 00:21:54.864 } 00:21:54.864 Got JSON-RPC error response 00:21:54.864 response: 00:21:54.864 { 00:21:54.864 "code": -13, 00:21:54.864 "message": "Permission denied" 00:21:54.864 } 00:21:54.864 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:54.864 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.864 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.864 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.864 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:54.864 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:54.864 10:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.864 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:54.864 10:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.247 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.914 nvme0n1 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
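This block exercises the negative re-key path: the subsystem is rotated to key2/key3, and a host-side bdev_nvme_set_keys that presents a non-matching controller key is expected to fail with -13 (Permission denied), as the request/response pair below confirms. A condensed sketch of that check, using the same RPCs the trace runs (the error message is only an illustrative marker):

  # Rotate the subsystem's expected keys, then verify that a mismatched
  # host-side re-key attempt is rejected rather than silently accepted.
  ./scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  if ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key0; then
    echo "ERROR: mismatched re-key unexpectedly succeeded" >&2
  fi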
00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.914 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:56.915 10:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:57.587 request: 00:21:57.587 { 00:21:57.587 "name": "nvme0", 00:21:57.587 "dhchap_key": "key2", 00:21:57.587 "dhchap_ctrlr_key": "key0", 00:21:57.587 "method": "bdev_nvme_set_keys", 00:21:57.587 "req_id": 1 00:21:57.587 } 00:21:57.587 Got JSON-RPC error response 00:21:57.587 response: 00:21:57.587 { 00:21:57.587 "code": -13, 00:21:57.587 "message": "Permission denied" 00:21:57.587 } 00:21:57.587 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:57.587 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.587 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.587 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.587 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:57.587 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:57.587 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.587 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:57.587 10:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:58.550 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:58.550 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.550 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 994938 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 994938 ']' 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 994938 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:58.811 10:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 994938 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 994938' 00:21:58.811 killing process with pid 994938 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 994938 00:21:58.811 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 994938 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.071 rmmod nvme_tcp 00:21:59.071 rmmod nvme_fabrics 00:21:59.071 rmmod nvme_keyring 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1021206 ']' 00:21:59.071 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1021206 00:21:59.072 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1021206 ']' 00:21:59.072 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1021206 00:21:59.072 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:59.072 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.072 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021206 00:21:59.072 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.072 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.072 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021206' 00:21:59.072 killing process with pid 1021206 00:21:59.072 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1021206 00:21:59.072 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 1021206 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.333 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.244 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.244 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.YfY /tmp/spdk.key-sha256.dK6 /tmp/spdk.key-sha384.lUh /tmp/spdk.key-sha512.lVu /tmp/spdk.key-sha512.COM /tmp/spdk.key-sha384.HwY /tmp/spdk.key-sha256.0Vr '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:01.244 00:22:01.244 real 2m36.718s 00:22:01.244 user 5m52.809s 00:22:01.244 sys 0m24.770s 00:22:01.244 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.244 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.244 ************************************ 00:22:01.244 END TEST nvmf_auth_target 00:22:01.244 ************************************ 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:01.506 ************************************ 00:22:01.506 START TEST nvmf_bdevio_no_huge 00:22:01.506 ************************************ 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:01.506 * Looking for test storage... 
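[Editor's note] The xtrace that follows the test-storage lookup steps through lt/cmp_versions from scripts/common.sh while checking the installed lcov version. A condensed re-implementation of that comparison logic, as a sketch (it assumes purely numeric components; the real helper also validates digits through its decimal() step, and handles more operators):

    # lt X Y: succeed when version X sorts strictly before version Y.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v n
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':', as the trace shows
        IFS=.-: read -ra ver2 <<< "$3"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do  # missing components compare as 0
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '==' ]]
    }

    lt 1.15 2 && echo "1.15 < 2"   # the path this log takes: 1 < 2 decides it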
00:22:01.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:01.506 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.507 --rc genhtml_branch_coverage=1 00:22:01.507 --rc genhtml_function_coverage=1 00:22:01.507 --rc genhtml_legend=1 00:22:01.507 --rc geninfo_all_blocks=1 00:22:01.507 --rc geninfo_unexecuted_blocks=1 00:22:01.507 00:22:01.507 ' 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.507 --rc genhtml_branch_coverage=1 00:22:01.507 --rc genhtml_function_coverage=1 00:22:01.507 --rc genhtml_legend=1 00:22:01.507 --rc geninfo_all_blocks=1 00:22:01.507 --rc geninfo_unexecuted_blocks=1 00:22:01.507 00:22:01.507 ' 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.507 --rc genhtml_branch_coverage=1 00:22:01.507 --rc genhtml_function_coverage=1 00:22:01.507 --rc genhtml_legend=1 00:22:01.507 --rc geninfo_all_blocks=1 00:22:01.507 --rc geninfo_unexecuted_blocks=1 00:22:01.507 00:22:01.507 ' 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.507 --rc genhtml_branch_coverage=1 00:22:01.507 --rc genhtml_function_coverage=1 00:22:01.507 --rc genhtml_legend=1 00:22:01.507 --rc geninfo_all_blocks=1 00:22:01.507 --rc geninfo_unexecuted_blocks=1 00:22:01.507 00:22:01.507 ' 00:22:01.507 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:01.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.769 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.916 
10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:09.916 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:09.916 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:09.916 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:09.916 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:09.916 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.917 10:49:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:22:09.917 00:22:09.917 --- 10.0.0.2 ping statistics --- 00:22:09.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.917 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:22:09.917 00:22:09.917 --- 10.0.0.1 ping statistics --- 00:22:09.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.917 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1029372 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1029372 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1029372 ']' 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.917 10:49:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.917 [2024-11-19 10:49:48.328553] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:22:09.917 [2024-11-19 10:49:48.328627] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:09.917 [2024-11-19 10:49:48.436429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.917 [2024-11-19 10:49:48.497032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.917 [2024-11-19 10:49:48.497073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.917 [2024-11-19 10:49:48.497082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.917 [2024-11-19 10:49:48.497089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.917 [2024-11-19 10:49:48.497095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
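[Editor's note] At this point the no-hugepage target is up. The launch and the trace-snapshot hint from the startup notices, restated as standalone commands (the cvl_0_0_ns_spdk namespace, core mask, and workspace paths are specific to this CI host):

    # Start nvmf_tgt without hugepages, with a 1024 MB memory cap, on the cores
    # in mask 0x78 (3-6, matching the reactor notices below), inside the netns
    # that owns the target-side NIC.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

    # Capture a snapshot of the 0xFFFF tracepoint groups the app enabled, as
    # the startup notice above suggests.
    spdk_trace -s nvmf -i 0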
00:22:09.917 [2024-11-19 10:49:48.498569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:09.917 [2024-11-19 10:49:48.498729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:09.917 [2024-11-19 10:49:48.498886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.917 [2024-11-19 10:49:48.498887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:10.176 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.176 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.177 [2024-11-19 10:49:49.197451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.177 Malloc0 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.177 [2024-11-19 10:49:49.251370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.177 { 00:22:10.177 "params": { 00:22:10.177 "name": "Nvme$subsystem", 00:22:10.177 "trtype": "$TEST_TRANSPORT", 00:22:10.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.177 "adrfam": "ipv4", 00:22:10.177 "trsvcid": "$NVMF_PORT", 00:22:10.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.177 "hdgst": ${hdgst:-false}, 00:22:10.177 "ddgst": ${ddgst:-false} 00:22:10.177 }, 00:22:10.177 "method": "bdev_nvme_attach_controller" 00:22:10.177 } 00:22:10.177 EOF 00:22:10.177 )") 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:10.177 10:49:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:10.177 "params": { 00:22:10.177 "name": "Nvme1", 00:22:10.177 "trtype": "tcp", 00:22:10.177 "traddr": "10.0.0.2", 00:22:10.177 "adrfam": "ipv4", 00:22:10.177 "trsvcid": "4420", 00:22:10.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.177 "hdgst": false, 00:22:10.177 "ddgst": false 00:22:10.177 }, 00:22:10.177 "method": "bdev_nvme_attach_controller" 00:22:10.177 }' 00:22:10.177 [2024-11-19 10:49:49.308433] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
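[Editor's note] The heredoc printed above is how gen_nvmf_target_json hands bdevio its controller config: the bdev_nvme_attach_controller entry is wrapped into the target's JSON layout and delivered on an anonymous fd (/dev/fd/62) rather than a temp file. A sketch of that invocation, assuming the fd comes from process substitution (consistent with the /dev/fd/62 path in the trace, though the exact plumbing is inside target/bdevio.sh):

    # Run the bdevio app against the generated config without writing a file;
    # --no-huge -s 1024 mirrors the target's no-hugepage setup.
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024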
00:22:10.177 [2024-11-19 10:49:49.308505] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1029629 ] 00:22:10.437 [2024-11-19 10:49:49.406146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:10.437 [2024-11-19 10:49:49.468092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.437 [2024-11-19 10:49:49.468254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.437 [2024-11-19 10:49:49.468253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.698 I/O targets: 00:22:10.698 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:10.698 00:22:10.698 00:22:10.698 CUnit - A unit testing framework for C - Version 2.1-3 00:22:10.698 http://cunit.sourceforge.net/ 00:22:10.698 00:22:10.698 00:22:10.698 Suite: bdevio tests on: Nvme1n1 00:22:10.698 Test: blockdev write read block ...passed 00:22:10.698 Test: blockdev write zeroes read block ...passed 00:22:10.698 Test: blockdev write zeroes read no split ...passed 00:22:10.698 Test: blockdev write zeroes read split ...passed 00:22:10.698 Test: blockdev write zeroes read split partial ...passed 00:22:10.698 Test: blockdev reset ...[2024-11-19 10:49:49.876868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:10.698 [2024-11-19 10:49:49.876971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a800 (9): Bad file descriptor 00:22:10.698 [2024-11-19 10:49:49.891011] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:10.698 passed 00:22:10.959 Test: blockdev write read 8 blocks ...passed 00:22:10.959 Test: blockdev write read size > 128k ...passed 00:22:10.959 Test: blockdev write read invalid size ...passed 00:22:10.959 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:10.959 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:10.959 Test: blockdev write read max offset ...passed 00:22:10.959 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:10.959 Test: blockdev writev readv 8 blocks ...passed 00:22:10.959 Test: blockdev writev readv 30 x 1block ...passed 00:22:10.959 Test: blockdev writev readv block ...passed 00:22:10.959 Test: blockdev writev readv size > 128k ...passed 00:22:11.220 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:11.221 Test: blockdev comparev and writev ...[2024-11-19 10:49:50.158148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.221 [2024-11-19 10:49:50.158211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.221 [2024-11-19 10:49:50.158229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.221 [2024-11-19 10:49:50.158238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.221 [2024-11-19 10:49:50.158782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.221 [2024-11-19 10:49:50.158799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:11.221 [2024-11-19 10:49:50.158813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.221 [2024-11-19 10:49:50.158822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:11.221 [2024-11-19 10:49:50.159333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.221 [2024-11-19 10:49:50.159346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:11.221 [2024-11-19 10:49:50.159368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.221 [2024-11-19 10:49:50.159376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:11.221 [2024-11-19 10:49:50.159878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.221 [2024-11-19 10:49:50.159892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:11.221 [2024-11-19 10:49:50.159906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.221 [2024-11-19 10:49:50.159916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:11.221 passed 00:22:11.221 Test: blockdev nvme passthru rw ...passed 00:22:11.221 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:49:50.245106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.221 [2024-11-19 10:49:50.245123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:11.221 [2024-11-19 10:49:50.245500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.221 [2024-11-19 10:49:50.245514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:11.221 [2024-11-19 10:49:50.245895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.221 [2024-11-19 10:49:50.245909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:11.221 [2024-11-19 10:49:50.246285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.221 [2024-11-19 10:49:50.246298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:11.221 passed 00:22:11.221 Test: blockdev nvme admin passthru ...passed 00:22:11.221 Test: blockdev copy ...passed 00:22:11.221 00:22:11.221 Run Summary: Type Total Ran Passed Failed Inactive 00:22:11.221 suites 1 1 n/a 0 0 00:22:11.221 tests 23 23 23 0 0 00:22:11.221 asserts 152 152 152 0 n/a 00:22:11.221 00:22:11.221 Elapsed time = 1.224 seconds 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:11.482 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:11.483 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:11.483 rmmod nvme_tcp 00:22:11.483 rmmod nvme_fabrics 00:22:11.744 rmmod nvme_keyring 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1029372 ']' 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1029372 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1029372 ']' 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1029372 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1029372 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1029372' 00:22:11.744 killing process with pid 1029372 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1029372 00:22:11.744 10:49:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1029372 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.006 10:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.921 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:13.921 00:22:13.921 real 0m12.592s 00:22:13.921 user 0m14.493s 00:22:13.921 sys 0m6.657s 00:22:13.921 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.921 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:13.921 ************************************ 00:22:13.921 END TEST nvmf_bdevio_no_huge 00:22:13.921 ************************************ 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:14.183 ************************************ 00:22:14.183 START TEST nvmf_tls 00:22:14.183 ************************************ 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:14.183 * Looking for test storage... 00:22:14.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:14.183 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:14.184 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:14.184 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:14.184 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:14.184 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.184 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:14.184 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:14.184 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:14.184 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:14.184 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:14.184 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.446 --rc genhtml_branch_coverage=1 00:22:14.446 --rc genhtml_function_coverage=1 00:22:14.446 --rc genhtml_legend=1 00:22:14.446 --rc geninfo_all_blocks=1 00:22:14.446 --rc geninfo_unexecuted_blocks=1 00:22:14.446 00:22:14.446 ' 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.446 --rc genhtml_branch_coverage=1 00:22:14.446 --rc genhtml_function_coverage=1 00:22:14.446 --rc genhtml_legend=1 00:22:14.446 --rc geninfo_all_blocks=1 00:22:14.446 --rc geninfo_unexecuted_blocks=1 00:22:14.446 00:22:14.446 ' 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.446 --rc genhtml_branch_coverage=1 00:22:14.446 --rc genhtml_function_coverage=1 00:22:14.446 --rc genhtml_legend=1 00:22:14.446 --rc geninfo_all_blocks=1 00:22:14.446 --rc geninfo_unexecuted_blocks=1 00:22:14.446 00:22:14.446 ' 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.446 --rc genhtml_branch_coverage=1 00:22:14.446 --rc genhtml_function_coverage=1 00:22:14.446 --rc genhtml_legend=1 00:22:14.446 --rc geninfo_all_blocks=1 00:22:14.446 --rc geninfo_unexecuted_blocks=1 00:22:14.446 00:22:14.446 ' 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
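The lt/cmp_versions gate traced above decides whether the extra lcov branch/function coverage flags get exported. A minimal standalone sketch of the same component-wise compare, for reference (the ver_lt name is illustrative, not from scripts/common.sh, and purely numeric version fields are assumed):

ver_lt() {
    local -a a b
    # split both versions on the same separators the traced script uses (.-:)
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field wins
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov < 2: keep the branch/function coverage flags"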
00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.446 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:14.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:14.447 10:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
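After building the e810/x722/mlx PCI-ID whitelists above, the harness resolves each matching PCI function to its kernel net device through sysfs (the glob at nvmf/common.sh@411 below). A standalone sketch of that lookup under the same sysfs layout, using the first address the discovery below reports (no SPDK helpers assumed):

pci=0000:4b:00.0   # first E810 function found below (0x8086 - 0x159b)
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$dev" ] || continue                    # glob stayed literal: no netdev bound
    echo "net device under $pci: ${dev##*/}"     # e.g. cvl_0_0 in this run
done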
00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:22.594 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:22.594 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:22.594 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:22.594 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.594 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:22.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:22:22.595 00:22:22.595 --- 10.0.0.2 ping statistics --- 00:22:22.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.595 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:22:22.595 00:22:22.595 --- 10.0.0.1 ping statistics --- 00:22:22.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.595 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1034069 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1034069 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1034069 ']' 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.595 10:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.595 [2024-11-19 10:50:00.960640] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:22:22.595 [2024-11-19 10:50:00.960701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.595 [2024-11-19 10:50:01.063931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.595 [2024-11-19 10:50:01.115106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.595 [2024-11-19 10:50:01.115154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.595 [2024-11-19 10:50:01.115172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.595 [2024-11-19 10:50:01.115179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.595 [2024-11-19 10:50:01.115186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.595 [2024-11-19 10:50:01.115884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.856 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.856 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:22.856 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:22.856 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.856 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.856 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.856 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:22.856 10:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:22.856 true 00:22:22.856 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.856 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:23.118 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:23.118 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:23.118 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:23.379 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.379 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:23.641 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:23.641 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:23.641 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:23.641 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.641 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:23.902 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:23.902 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:23.902 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.902 10:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:24.163 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:24.163 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:24.163 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:24.163 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.163 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:24.424 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:24.424 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:24.424 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:24.685 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.685 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.6SYUp7ulQG 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.J6rghn6oyf 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6SYUp7ulQG 00:22:24.946 10:50:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.J6rghn6oyf 00:22:24.946 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:25.206 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:25.206 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.6SYUp7ulQG 00:22:25.206 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6SYUp7ulQG 00:22:25.206 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.466 [2024-11-19 10:50:04.549889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.466 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.726 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:25.726 [2024-11-19 10:50:04.886714] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:25.726 [2024-11-19 10:50:04.886912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.726 10:50:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:25.988 malloc0 00:22:25.988 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.250 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6SYUp7ulQG 00:22:26.250 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:26.511 10:50:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.6SYUp7ulQG 00:22:36.512 Initializing NVMe Controllers 00:22:36.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.512 Initialization complete. Launching workers. 00:22:36.512 ======================================================== 00:22:36.512 Latency(us) 00:22:36.512 Device Information : IOPS MiB/s Average min max 00:22:36.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18855.48 73.65 3394.41 1127.88 4092.27 00:22:36.512 ======================================================== 00:22:36.512 Total : 18855.48 73.65 3394.41 1127.88 4092.27 00:22:36.512 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6SYUp7ulQG 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6SYUp7ulQG 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1037029 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1037029 /var/tmp/bdevperf.sock 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1037029 ']' 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:36.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.773 10:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.773 [2024-11-19 10:50:15.766210] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:22:36.773 [2024-11-19 10:50:15.766267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1037029 ] 00:22:36.773 [2024-11-19 10:50:15.855039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.773 [2024-11-19 10:50:15.890105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.714 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.714 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:37.714 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6SYUp7ulQG 00:22:37.714 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:37.714 [2024-11-19 10:50:16.869453] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.975 TLSTESTn1 00:22:37.975 10:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:37.975 Running I/O for 10 seconds... 
00:22:40.302 4664.00 IOPS, 18.22 MiB/s [2024-11-19T09:50:20.067Z] 4458.00 IOPS, 17.41 MiB/s [2024-11-19T09:50:21.451Z] 5058.00 IOPS, 19.76 MiB/s [2024-11-19T09:50:22.393Z] 5163.75 IOPS, 20.17 MiB/s [2024-11-19T09:50:23.335Z] 5309.60 IOPS, 20.74 MiB/s [2024-11-19T09:50:24.277Z] 5210.00 IOPS, 20.35 MiB/s [2024-11-19T09:50:25.218Z] 5333.57 IOPS, 20.83 MiB/s [2024-11-19T09:50:26.160Z] 5355.62 IOPS, 20.92 MiB/s [2024-11-19T09:50:27.101Z] 5358.56 IOPS, 20.93 MiB/s [2024-11-19T09:50:27.101Z] 5413.90 IOPS, 21.15 MiB/s 00:22:47.906 Latency(us) 00:22:47.906 [2024-11-19T09:50:27.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.906 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:47.906 Verification LBA range: start 0x0 length 0x2000 00:22:47.906 TLSTESTn1 : 10.01 5420.45 21.17 0.00 0.00 23580.27 4369.07 36700.16 00:22:47.906 [2024-11-19T09:50:27.101Z] =================================================================================================================== 00:22:47.906 [2024-11-19T09:50:27.101Z] Total : 5420.45 21.17 0.00 0.00 23580.27 4369.07 36700.16 00:22:47.906 { 00:22:47.906 "results": [ 00:22:47.906 { 00:22:47.906 "job": "TLSTESTn1", 00:22:47.906 "core_mask": "0x4", 00:22:47.906 "workload": "verify", 00:22:47.906 "status": "finished", 00:22:47.906 "verify_range": { 00:22:47.906 "start": 0, 00:22:47.906 "length": 8192 00:22:47.906 }, 00:22:47.906 "queue_depth": 128, 00:22:47.906 "io_size": 4096, 00:22:47.906 "runtime": 10.011166, 00:22:47.906 "iops": 5420.447528289911, 00:22:47.906 "mibps": 21.173623157382465, 00:22:47.906 "io_failed": 0, 00:22:47.906 "io_timeout": 0, 00:22:47.906 "avg_latency_us": 23580.265784084277, 00:22:47.906 "min_latency_us": 4369.066666666667, 00:22:47.906 "max_latency_us": 36700.16 00:22:47.906 } 00:22:47.906 ], 00:22:47.906 "core_count": 1 00:22:47.906 } 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1037029 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1037029 ']' 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1037029 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1037029 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1037029' 00:22:48.166 killing process with pid 1037029 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1037029 00:22:48.166 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.166 00:22:48.166 Latency(us) 00:22:48.166 [2024-11-19T09:50:27.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.166 [2024-11-19T09:50:27.361Z] 
=================================================================================================================== 00:22:48.166 [2024-11-19T09:50:27.361Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1037029 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.J6rghn6oyf 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.J6rghn6oyf 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.166 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.J6rghn6oyf 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.J6rghn6oyf 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1039155 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1039155 /var/tmp/bdevperf.sock 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1039155 ']' 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.167 10:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.167 [2024-11-19 10:50:27.334639] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:22:48.167 [2024-11-19 10:50:27.334697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039155 ] 00:22:48.427 [2024-11-19 10:50:27.417676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.427 [2024-11-19 10:50:27.446487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.997 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.997 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:48.997 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.J6rghn6oyf 00:22:49.258 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:49.258 [2024-11-19 10:50:28.453179] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.518 [2024-11-19 10:50:28.459081] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.518 [2024-11-19 10:50:28.459229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c5bb0 (107): Transport endpoint is not connected 00:22:49.518 [2024-11-19 10:50:28.460224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c5bb0 (9): Bad file descriptor 00:22:49.518 [2024-11-19 10:50:28.461225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:49.518 [2024-11-19 10:50:28.461234] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.518 [2024-11-19 10:50:28.461240] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:49.518 [2024-11-19 10:50:28.461251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:22:49.518 request: 00:22:49.518 { 00:22:49.518 "name": "TLSTEST", 00:22:49.518 "trtype": "tcp", 00:22:49.518 "traddr": "10.0.0.2", 00:22:49.518 "adrfam": "ipv4", 00:22:49.518 "trsvcid": "4420", 00:22:49.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.518 "prchk_reftag": false, 00:22:49.518 "prchk_guard": false, 00:22:49.518 "hdgst": false, 00:22:49.518 "ddgst": false, 00:22:49.518 "psk": "key0", 00:22:49.518 "allow_unrecognized_csi": false, 00:22:49.518 "method": "bdev_nvme_attach_controller", 00:22:49.518 "req_id": 1 00:22:49.518 } 00:22:49.518 Got JSON-RPC error response 00:22:49.518 response: 00:22:49.518 { 00:22:49.518 "code": -5, 00:22:49.518 "message": "Input/output error" 00:22:49.518 } 00:22:49.518 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1039155 00:22:49.518 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1039155 ']' 00:22:49.518 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1039155 00:22:49.518 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:49.518 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.518 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1039155 00:22:49.518 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:49.518 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:49.518 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1039155' 00:22:49.518 killing process with pid 1039155 00:22:49.518 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1039155 00:22:49.518 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.518 00:22:49.518 Latency(us) 00:22:49.518 [2024-11-19T09:50:28.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.518 [2024-11-19T09:50:28.713Z] =================================================================================================================== 00:22:49.519 [2024-11-19T09:50:28.714Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1039155 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6SYUp7ulQG 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.6SYUp7ulQG 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6SYUp7ulQG 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6SYUp7ulQG 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1039493 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1039493 /var/tmp/bdevperf.sock 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1039493 ']' 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.519 10:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.519 [2024-11-19 10:50:28.709238] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:22:49.519 [2024-11-19 10:50:28.709292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039493 ] 00:22:49.779 [2024-11-19 10:50:28.794238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.779 [2024-11-19 10:50:28.823175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.350 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.350 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:50.350 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6SYUp7ulQG 00:22:50.611 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:50.872 [2024-11-19 10:50:29.841551] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.872 [2024-11-19 10:50:29.852392] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:50.872 [2024-11-19 10:50:29.852415] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:50.872 [2024-11-19 10:50:29.852434] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:50.872 [2024-11-19 10:50:29.852842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb2bb0 (107): Transport endpoint is not connected 00:22:50.872 [2024-11-19 10:50:29.853837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb2bb0 (9): Bad file descriptor 00:22:50.872 [2024-11-19 10:50:29.854839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:50.872 [2024-11-19 10:50:29.854847] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:50.872 [2024-11-19 10:50:29.854855] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:50.872 [2024-11-19 10:50:29.854862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
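This second case fails one layer later than the first: the target accepts the TCP connection, but its PSK lookup misses because the lookup is keyed by an identity string combining host and subsystem NQNs, printed verbatim in the tcp_sock_get_key error above. A toy model of that lookup follows; the assumption that only host1 was registered against cnode1 earlier in the test (that setup is outside this excerpt) is mine, and the identity format is copied as logged rather than re-derived from the spec.

# Server-side PSK lookup keyed by the identity string seen in the log.
registered = {
    "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1": "key0",
}

def find_psk(hostnqn: str, subnqn: str) -> str:
    identity = f"NVMe0R01 {hostnqn} {subnqn}"
    try:
        return registered[identity]
    except KeyError:
        raise LookupError(f"Could not find PSK for identity: {identity}")

# host2 was never paired with cnode1, so this fails exactly like the
# tcp_sock_get_key / posix_sock_psk_find_session_server_cb errors above:
find_psk("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1")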
00:22:50.872 request: 00:22:50.872 { 00:22:50.872 "name": "TLSTEST", 00:22:50.872 "trtype": "tcp", 00:22:50.872 "traddr": "10.0.0.2", 00:22:50.872 "adrfam": "ipv4", 00:22:50.872 "trsvcid": "4420", 00:22:50.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.872 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:50.872 "prchk_reftag": false, 00:22:50.872 "prchk_guard": false, 00:22:50.872 "hdgst": false, 00:22:50.872 "ddgst": false, 00:22:50.872 "psk": "key0", 00:22:50.872 "allow_unrecognized_csi": false, 00:22:50.872 "method": "bdev_nvme_attach_controller", 00:22:50.872 "req_id": 1 00:22:50.872 } 00:22:50.872 Got JSON-RPC error response 00:22:50.872 response: 00:22:50.872 { 00:22:50.872 "code": -5, 00:22:50.872 "message": "Input/output error" 00:22:50.872 } 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1039493 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1039493 ']' 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1039493 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1039493 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1039493' 00:22:50.872 killing process with pid 1039493 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1039493 00:22:50.872 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.872 00:22:50.872 Latency(us) 00:22:50.872 [2024-11-19T09:50:30.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.872 [2024-11-19T09:50:30.067Z] =================================================================================================================== 00:22:50.872 [2024-11-19T09:50:30.067Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:50.872 10:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1039493 00:22:50.872 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6SYUp7ulQG 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.6SYUp7ulQG 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6SYUp7ulQG 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6SYUp7ulQG 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1039840 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1039840 /var/tmp/bdevperf.sock 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1039840 ']' 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.873 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.134 [2024-11-19 10:50:30.105408] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:22:51.134 [2024-11-19 10:50:30.105463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039840 ] 00:22:51.134 [2024-11-19 10:50:30.191406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.134 [2024-11-19 10:50:30.219678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.077 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.077 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:52.077 10:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6SYUp7ulQG 00:22:52.077 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:52.077 [2024-11-19 10:50:31.234233] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.077 [2024-11-19 10:50:31.245787] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:52.077 [2024-11-19 10:50:31.245808] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:52.077 [2024-11-19 10:50:31.245828] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:52.077 [2024-11-19 10:50:31.246583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f34bb0 (107): Transport endpoint is not connected 00:22:52.077 [2024-11-19 10:50:31.247579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f34bb0 (9): Bad file descriptor 00:22:52.077 [2024-11-19 10:50:31.248580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:52.077 [2024-11-19 10:50:31.248589] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:52.077 [2024-11-19 10:50:31.248594] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:52.077 [2024-11-19 10:50:31.248602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:22:52.077 request: 00:22:52.077 { 00:22:52.077 "name": "TLSTEST", 00:22:52.077 "trtype": "tcp", 00:22:52.077 "traddr": "10.0.0.2", 00:22:52.077 "adrfam": "ipv4", 00:22:52.077 "trsvcid": "4420", 00:22:52.077 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:52.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.077 "prchk_reftag": false, 00:22:52.078 "prchk_guard": false, 00:22:52.078 "hdgst": false, 00:22:52.078 "ddgst": false, 00:22:52.078 "psk": "key0", 00:22:52.078 "allow_unrecognized_csi": false, 00:22:52.078 "method": "bdev_nvme_attach_controller", 00:22:52.078 "req_id": 1 00:22:52.078 } 00:22:52.078 Got JSON-RPC error response 00:22:52.078 response: 00:22:52.078 { 00:22:52.078 "code": -5, 00:22:52.078 "message": "Input/output error" 00:22:52.078 } 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1039840 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1039840 ']' 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1039840 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1039840 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1039840' 00:22:52.339 killing process with pid 1039840 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1039840 00:22:52.339 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.339 00:22:52.339 Latency(us) 00:22:52.339 [2024-11-19T09:50:31.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.339 [2024-11-19T09:50:31.534Z] =================================================================================================================== 00:22:52.339 [2024-11-19T09:50:31.534Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1039840 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:52.339 
10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.339 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1040181 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1040181 /var/tmp/bdevperf.sock 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1040181 ']' 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.340 10:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.340 [2024-11-19 10:50:31.493675] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:22:52.340 [2024-11-19 10:50:31.493730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040181 ] 00:22:52.600 [2024-11-19 10:50:31.578524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.600 [2024-11-19 10:50:31.606194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.172 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.172 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.172 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:53.433 [2024-11-19 10:50:32.444175] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:53.433 [2024-11-19 10:50:32.444200] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:53.433 request: 00:22:53.433 { 00:22:53.433 "name": "key0", 00:22:53.433 "path": "", 00:22:53.433 "method": "keyring_file_add_key", 00:22:53.433 "req_id": 1 00:22:53.433 } 00:22:53.433 Got JSON-RPC error response 00:22:53.433 response: 00:22:53.433 { 00:22:53.433 "code": -1, 00:22:53.433 "message": "Operation not permitted" 00:22:53.433 } 00:22:53.433 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.433 [2024-11-19 10:50:32.628711] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.433 [2024-11-19 10:50:32.628734] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:53.695 request: 00:22:53.695 { 00:22:53.695 "name": "TLSTEST", 00:22:53.695 "trtype": "tcp", 00:22:53.695 "traddr": "10.0.0.2", 00:22:53.695 "adrfam": "ipv4", 00:22:53.695 "trsvcid": "4420", 00:22:53.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.695 "prchk_reftag": false, 00:22:53.695 "prchk_guard": false, 00:22:53.695 "hdgst": false, 00:22:53.695 "ddgst": false, 00:22:53.695 "psk": "key0", 00:22:53.695 "allow_unrecognized_csi": false, 00:22:53.695 "method": "bdev_nvme_attach_controller", 00:22:53.695 "req_id": 1 00:22:53.695 } 00:22:53.695 Got JSON-RPC error response 00:22:53.695 response: 00:22:53.695 { 00:22:53.695 "code": -126, 00:22:53.695 "message": "Required key not available" 00:22:53.695 } 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1040181 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1040181 ']' 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1040181 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1040181 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1040181' 00:22:53.695 killing process with pid 1040181 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1040181 00:22:53.695 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.695 00:22:53.695 Latency(us) 00:22:53.695 [2024-11-19T09:50:32.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.695 [2024-11-19T09:50:32.890Z] =================================================================================================================== 00:22:53.695 [2024-11-19T09:50:32.890Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1040181 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1034069 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1034069 ']' 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1034069 00:22:53.695 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:53.696 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.696 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1034069 00:22:53.696 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:53.696 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:53.696 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1034069' 00:22:53.696 killing process with pid 1034069 00:22:53.696 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1034069 00:22:53.696 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1034069 00:22:53.957 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:53.957 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:53.957 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:53.957 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:53.957 10:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:53.957 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:53.957 10:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:53.957 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:53.957 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:53.957 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.CEomI34Ee9 00:22:53.957 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:53.957 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.CEomI34Ee9 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1040488 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1040488 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1040488 ']' 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.958 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.958 [2024-11-19 10:50:33.109104] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:22:53.958 [2024-11-19 10:50:33.109173] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.218 [2024-11-19 10:50:33.201932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.218 [2024-11-19 10:50:33.232277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.218 [2024-11-19 10:50:33.232304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
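The format_interchange_psk step above shells out to an inline Python snippet whose result, key_long, is logged a few lines earlier. Its transformation can be re-derived as below; this is a sketch of the TLS PSK interchange encoding (prefix, two-digit hash identifier, base64 of the configured key with a CRC32 appended), where the little-endian CRC byte order is an assumption that reproduces the logged value.

import base64
import zlib

def format_interchange_psk(key: bytes, hash_id: int) -> str:
    # "NVMeTLSkey-1:<hh>:" + base64(key || CRC32(key)) + ":"
    # hash_id 1 selects SHA-256, 2 selects SHA-384 (hence the "02" above).
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed byte order
    b64 = base64.b64encode(key + crc).decode("utf-8")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

key = b"00112233445566778899aabbccddeeff0011223344556677"
print(format_interchange_psk(key, 2))
# Matches key_long above:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: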
00:22:54.218 [2024-11-19 10:50:33.232310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.218 [2024-11-19 10:50:33.232315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.218 [2024-11-19 10:50:33.232319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.218 [2024-11-19 10:50:33.232782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.792 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.792 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:54.792 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.792 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.792 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.792 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.792 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.CEomI34Ee9 00:22:54.792 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CEomI34Ee9 00:22:54.792 10:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.053 [2024-11-19 10:50:34.080367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.053 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.314 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.314 [2024-11-19 10:50:34.397138] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.314 [2024-11-19 10:50:34.397350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.314 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:55.575 malloc0 00:22:55.575 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:55.575 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CEomI34Ee9 00:22:55.836 10:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CEomI34Ee9 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CEomI34Ee9 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1040895 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1040895 /var/tmp/bdevperf.sock 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1040895 ']' 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.099 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.099 [2024-11-19 10:50:35.123453] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:22:56.099 [2024-11-19 10:50:35.123507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040895 ] 00:22:56.099 [2024-11-19 10:50:35.205444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.099 [2024-11-19 10:50:35.234601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.040 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.040 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:57.040 10:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CEomI34Ee9 00:22:57.040 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.041 [2024-11-19 10:50:36.209156] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.302 TLSTESTn1 00:22:57.302 10:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:57.302 Running I/O for 10 seconds... 00:22:59.632 5370.00 IOPS, 20.98 MiB/s [2024-11-19T09:50:39.768Z] 5900.50 IOPS, 23.05 MiB/s [2024-11-19T09:50:40.710Z] 5887.33 IOPS, 23.00 MiB/s [2024-11-19T09:50:41.653Z] 5801.25 IOPS, 22.66 MiB/s [2024-11-19T09:50:42.594Z] 5668.60 IOPS, 22.14 MiB/s [2024-11-19T09:50:43.536Z] 5604.50 IOPS, 21.89 MiB/s [2024-11-19T09:50:44.479Z] 5523.43 IOPS, 21.58 MiB/s [2024-11-19T09:50:45.423Z] 5443.88 IOPS, 21.27 MiB/s [2024-11-19T09:50:46.820Z] 5285.11 IOPS, 20.64 MiB/s [2024-11-19T09:50:46.820Z] 5265.90 IOPS, 20.57 MiB/s 00:23:07.625 Latency(us) 00:23:07.625 [2024-11-19T09:50:46.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.625 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:07.625 Verification LBA range: start 0x0 length 0x2000 00:23:07.625 TLSTESTn1 : 10.02 5269.90 20.59 0.00 0.00 24253.43 4560.21 46530.56 00:23:07.625 [2024-11-19T09:50:46.820Z] =================================================================================================================== 00:23:07.625 [2024-11-19T09:50:46.820Z] Total : 5269.90 20.59 0.00 0.00 24253.43 4560.21 46530.56 00:23:07.625 { 00:23:07.625 "results": [ 00:23:07.625 { 00:23:07.625 "job": "TLSTESTn1", 00:23:07.625 "core_mask": "0x4", 00:23:07.625 "workload": "verify", 00:23:07.625 "status": "finished", 00:23:07.625 "verify_range": { 00:23:07.625 "start": 0, 00:23:07.625 "length": 8192 00:23:07.625 }, 00:23:07.625 "queue_depth": 128, 00:23:07.625 "io_size": 4096, 00:23:07.625 "runtime": 10.016507, 00:23:07.625 "iops": 5269.900974461457, 00:23:07.625 "mibps": 20.585550681490066, 00:23:07.625 "io_failed": 0, 00:23:07.625 "io_timeout": 0, 00:23:07.625 "avg_latency_us": 24253.426097323787, 00:23:07.625 "min_latency_us": 4560.213333333333, 00:23:07.625 "max_latency_us": 46530.56 00:23:07.625 } 00:23:07.625 ], 00:23:07.625 "core_count": 1 
00:23:07.625 } 00:23:07.625 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.625 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1040895 00:23:07.625 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1040895 ']' 00:23:07.625 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1040895 00:23:07.625 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:07.625 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.625 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1040895 00:23:07.625 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.625 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.625 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1040895' 00:23:07.625 killing process with pid 1040895 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1040895 00:23:07.626 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.626 00:23:07.626 Latency(us) 00:23:07.626 [2024-11-19T09:50:46.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.626 [2024-11-19T09:50:46.821Z] =================================================================================================================== 00:23:07.626 [2024-11-19T09:50:46.821Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1040895 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.CEomI34Ee9 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CEomI34Ee9 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CEomI34Ee9 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CEomI34Ee9 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.626 10:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CEomI34Ee9 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1043057 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1043057 /var/tmp/bdevperf.sock 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1043057 ']' 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.626 10:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.626 [2024-11-19 10:50:46.683599] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:23:07.626 [2024-11-19 10:50:46.683661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043057 ] 00:23:07.626 [2024-11-19 10:50:46.764872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.626 [2024-11-19 10:50:46.793578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.569 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.569 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:08.569 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CEomI34Ee9 00:23:08.569 [2024-11-19 10:50:47.615517] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CEomI34Ee9': 0100666 00:23:08.569 [2024-11-19 10:50:47.615536] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:08.569 request: 00:23:08.569 { 00:23:08.569 "name": "key0", 00:23:08.569 "path": "/tmp/tmp.CEomI34Ee9", 00:23:08.569 "method": "keyring_file_add_key", 00:23:08.569 "req_id": 1 00:23:08.569 } 00:23:08.569 Got JSON-RPC error response 00:23:08.569 response: 00:23:08.569 { 00:23:08.569 "code": -1, 00:23:08.569 "message": "Operation not permitted" 00:23:08.569 } 00:23:08.569 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.830 [2024-11-19 10:50:47.784011] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.830 [2024-11-19 10:50:47.784033] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:08.830 request: 00:23:08.830 { 00:23:08.830 "name": "TLSTEST", 00:23:08.830 "trtype": "tcp", 00:23:08.830 "traddr": "10.0.0.2", 00:23:08.830 "adrfam": "ipv4", 00:23:08.830 "trsvcid": "4420", 00:23:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.830 "prchk_reftag": false, 00:23:08.830 "prchk_guard": false, 00:23:08.830 "hdgst": false, 00:23:08.830 "ddgst": false, 00:23:08.830 "psk": "key0", 00:23:08.830 "allow_unrecognized_csi": false, 00:23:08.830 "method": "bdev_nvme_attach_controller", 00:23:08.830 "req_id": 1 00:23:08.830 } 00:23:08.830 Got JSON-RPC error response 00:23:08.830 response: 00:23:08.830 { 00:23:08.830 "code": -126, 00:23:08.830 "message": "Required key not available" 00:23:08.830 } 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1043057 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1043057 ']' 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1043057 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1043057 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1043057' 00:23:08.830 killing process with pid 1043057 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1043057 00:23:08.830 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.830 00:23:08.830 Latency(us) 00:23:08.830 [2024-11-19T09:50:48.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.830 [2024-11-19T09:50:48.025Z] =================================================================================================================== 00:23:08.830 [2024-11-19T09:50:48.025Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1043057 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1040488 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1040488 ']' 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1040488 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.830 10:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1040488 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1040488' 00:23:09.091 killing process with pid 1040488 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1040488 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1040488 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1043302 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1043302 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1043302 ']' 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.091 10:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.091 [2024-11-19 10:50:48.212464] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:23:09.091 [2024-11-19 10:50:48.212519] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.352 [2024-11-19 10:50:48.301227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.352 [2024-11-19 10:50:48.329640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.352 [2024-11-19 10:50:48.329671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.352 [2024-11-19 10:50:48.329677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.352 [2024-11-19 10:50:48.329685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.352 [2024-11-19 10:50:48.329693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.352 [2024-11-19 10:50:48.330184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.CEomI34Ee9 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.CEomI34Ee9 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.CEomI34Ee9 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CEomI34Ee9 00:23:09.923 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.183 [2024-11-19 10:50:49.213525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.183 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.443 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:10.443 [2024-11-19 10:50:49.546346] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.443 [2024-11-19 10:50:49.546534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.443 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:10.705 malloc0 00:23:10.705 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:10.705 10:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CEomI34Ee9 00:23:10.966 [2024-11-19 
10:50:50.029374] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CEomI34Ee9': 0100666 00:23:10.966 [2024-11-19 10:50:50.029400] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:10.966 request: 00:23:10.966 { 00:23:10.966 "name": "key0", 00:23:10.966 "path": "/tmp/tmp.CEomI34Ee9", 00:23:10.966 "method": "keyring_file_add_key", 00:23:10.966 "req_id": 1 00:23:10.966 } 00:23:10.966 Got JSON-RPC error response 00:23:10.966 response: 00:23:10.966 { 00:23:10.966 "code": -1, 00:23:10.966 "message": "Operation not permitted" 00:23:10.966 } 00:23:10.966 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.227 [2024-11-19 10:50:50.197817] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:11.227 [2024-11-19 10:50:50.197854] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:11.227 request: 00:23:11.227 { 00:23:11.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.227 "host": "nqn.2016-06.io.spdk:host1", 00:23:11.227 "psk": "key0", 00:23:11.227 "method": "nvmf_subsystem_add_host", 00:23:11.227 "req_id": 1 00:23:11.227 } 00:23:11.227 Got JSON-RPC error response 00:23:11.227 response: 00:23:11.227 { 00:23:11.227 "code": -32603, 00:23:11.227 "message": "Internal error" 00:23:11.227 } 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1043302 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1043302 ']' 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1043302 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1043302 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1043302' 00:23:11.227 killing process with pid 1043302 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1043302 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1043302 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.CEomI34Ee9 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:11.227 10:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1043930 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1043930 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1043930 ']' 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.227 10:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.487 [2024-11-19 10:50:50.474066] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:23:11.487 [2024-11-19 10:50:50.474126] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.487 [2024-11-19 10:50:50.562586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.487 [2024-11-19 10:50:50.592350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.487 [2024-11-19 10:50:50.592377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.487 [2024-11-19 10:50:50.592383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.487 [2024-11-19 10:50:50.592388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.487 [2024-11-19 10:50:50.592392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
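The failed keyring_file_add_key a few entries back is the deliberate negative case of this test: the target refuses to load a PSK file whose mode is group/world-readable (the trace shows 0100666), the RPC returns "Operation not permitted", and the follow-on nvmf_subsystem_add_host fails with -32603 because key0 was never registered. The script then kills the first target, runs chmod 0600 on the key, and restarts for the positive case. A minimal sketch of the same check, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and a pre-existing key file (the path here is hypothetical; the log uses a mktemp name):

  # Reproduce the key-permission check seen in the trace above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY=/tmp/tls_psk.txt   # hypothetical key file, already populated

  chmod 0666 "$KEY"
  "$RPC" keyring_file_add_key key0 "$KEY" && echo "unexpected: world-readable key accepted"
  # expected: "Invalid permissions for key file ... 0100666" and a nonzero exit

  chmod 0600 "$KEY"
  "$RPC" keyring_file_add_key key0 "$KEY"   # now succeeds

The NOT wrapper in the trace exists precisely to assert that the first call exits nonzero before the test proceeds.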
00:23:11.487 [2024-11-19 10:50:50.592832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.058 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.058 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:12.058 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.058 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.058 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.319 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.319 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.CEomI34Ee9 00:23:12.319 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CEomI34Ee9 00:23:12.319 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:12.319 [2024-11-19 10:50:51.444287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.319 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.579 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:12.840 [2024-11-19 10:50:51.777111] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.840 [2024-11-19 10:50:51.777310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.840 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:12.840 malloc0 00:23:12.840 10:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:13.100 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CEomI34Ee9 00:23:13.100 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.361 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1044324 00:23:13.361 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.361 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.361 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1044324 /var/tmp/bdevperf.sock 00:23:13.361 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1044324 ']' 00:23:13.361 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.361 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.361 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.361 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.361 10:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.361 [2024-11-19 10:50:52.509871] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:23:13.361 [2024-11-19 10:50:52.509925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044324 ] 00:23:13.623 [2024-11-19 10:50:52.591527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.623 [2024-11-19 10:50:52.620650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.193 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.193 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:14.193 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CEomI34Ee9 00:23:14.453 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:14.453 [2024-11-19 10:50:53.599229] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.715 TLSTESTn1 00:23:14.715 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:14.977 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:14.977 "subsystems": [ 00:23:14.977 { 00:23:14.977 "subsystem": "keyring", 00:23:14.977 "config": [ 00:23:14.977 { 00:23:14.977 "method": "keyring_file_add_key", 00:23:14.977 "params": { 00:23:14.977 "name": "key0", 00:23:14.977 "path": "/tmp/tmp.CEomI34Ee9" 00:23:14.977 } 00:23:14.977 } 00:23:14.977 ] 00:23:14.977 }, 00:23:14.977 { 00:23:14.977 "subsystem": "iobuf", 00:23:14.977 "config": [ 00:23:14.977 { 00:23:14.977 "method": "iobuf_set_options", 00:23:14.977 "params": { 00:23:14.977 "small_pool_count": 8192, 00:23:14.977 "large_pool_count": 1024, 00:23:14.977 "small_bufsize": 8192, 00:23:14.977 "large_bufsize": 135168, 00:23:14.977 "enable_numa": false 00:23:14.977 } 00:23:14.977 } 00:23:14.977 ] 00:23:14.977 }, 00:23:14.977 { 00:23:14.977 "subsystem": "sock", 00:23:14.977 "config": [ 00:23:14.977 { 00:23:14.977 "method": "sock_set_default_impl", 00:23:14.977 "params": { 00:23:14.977 "impl_name": "posix" 
00:23:14.977 } 00:23:14.977 }, 00:23:14.977 { 00:23:14.977 "method": "sock_impl_set_options", 00:23:14.977 "params": { 00:23:14.977 "impl_name": "ssl", 00:23:14.977 "recv_buf_size": 4096, 00:23:14.977 "send_buf_size": 4096, 00:23:14.977 "enable_recv_pipe": true, 00:23:14.977 "enable_quickack": false, 00:23:14.977 "enable_placement_id": 0, 00:23:14.977 "enable_zerocopy_send_server": true, 00:23:14.977 "enable_zerocopy_send_client": false, 00:23:14.977 "zerocopy_threshold": 0, 00:23:14.977 "tls_version": 0, 00:23:14.977 "enable_ktls": false 00:23:14.977 } 00:23:14.977 }, 00:23:14.977 { 00:23:14.977 "method": "sock_impl_set_options", 00:23:14.977 "params": { 00:23:14.977 "impl_name": "posix", 00:23:14.977 "recv_buf_size": 2097152, 00:23:14.977 "send_buf_size": 2097152, 00:23:14.977 "enable_recv_pipe": true, 00:23:14.977 "enable_quickack": false, 00:23:14.977 "enable_placement_id": 0, 00:23:14.977 "enable_zerocopy_send_server": true, 00:23:14.977 "enable_zerocopy_send_client": false, 00:23:14.977 "zerocopy_threshold": 0, 00:23:14.977 "tls_version": 0, 00:23:14.977 "enable_ktls": false 00:23:14.977 } 00:23:14.977 } 00:23:14.977 ] 00:23:14.977 }, 00:23:14.977 { 00:23:14.977 "subsystem": "vmd", 00:23:14.977 "config": [] 00:23:14.977 }, 00:23:14.977 { 00:23:14.977 "subsystem": "accel", 00:23:14.978 "config": [ 00:23:14.978 { 00:23:14.978 "method": "accel_set_options", 00:23:14.978 "params": { 00:23:14.978 "small_cache_size": 128, 00:23:14.978 "large_cache_size": 16, 00:23:14.978 "task_count": 2048, 00:23:14.978 "sequence_count": 2048, 00:23:14.978 "buf_count": 2048 00:23:14.978 } 00:23:14.978 } 00:23:14.978 ] 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "subsystem": "bdev", 00:23:14.978 "config": [ 00:23:14.978 { 00:23:14.978 "method": "bdev_set_options", 00:23:14.978 "params": { 00:23:14.978 "bdev_io_pool_size": 65535, 00:23:14.978 "bdev_io_cache_size": 256, 00:23:14.978 "bdev_auto_examine": true, 00:23:14.978 "iobuf_small_cache_size": 128, 00:23:14.978 "iobuf_large_cache_size": 16 00:23:14.978 } 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "method": "bdev_raid_set_options", 00:23:14.978 "params": { 00:23:14.978 "process_window_size_kb": 1024, 00:23:14.978 "process_max_bandwidth_mb_sec": 0 00:23:14.978 } 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "method": "bdev_iscsi_set_options", 00:23:14.978 "params": { 00:23:14.978 "timeout_sec": 30 00:23:14.978 } 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "method": "bdev_nvme_set_options", 00:23:14.978 "params": { 00:23:14.978 "action_on_timeout": "none", 00:23:14.978 "timeout_us": 0, 00:23:14.978 "timeout_admin_us": 0, 00:23:14.978 "keep_alive_timeout_ms": 10000, 00:23:14.978 "arbitration_burst": 0, 00:23:14.978 "low_priority_weight": 0, 00:23:14.978 "medium_priority_weight": 0, 00:23:14.978 "high_priority_weight": 0, 00:23:14.978 "nvme_adminq_poll_period_us": 10000, 00:23:14.978 "nvme_ioq_poll_period_us": 0, 00:23:14.978 "io_queue_requests": 0, 00:23:14.978 "delay_cmd_submit": true, 00:23:14.978 "transport_retry_count": 4, 00:23:14.978 "bdev_retry_count": 3, 00:23:14.978 "transport_ack_timeout": 0, 00:23:14.978 "ctrlr_loss_timeout_sec": 0, 00:23:14.978 "reconnect_delay_sec": 0, 00:23:14.978 "fast_io_fail_timeout_sec": 0, 00:23:14.978 "disable_auto_failback": false, 00:23:14.978 "generate_uuids": false, 00:23:14.978 "transport_tos": 0, 00:23:14.978 "nvme_error_stat": false, 00:23:14.978 "rdma_srq_size": 0, 00:23:14.978 "io_path_stat": false, 00:23:14.978 "allow_accel_sequence": false, 00:23:14.978 "rdma_max_cq_size": 0, 00:23:14.978 
"rdma_cm_event_timeout_ms": 0, 00:23:14.978 "dhchap_digests": [ 00:23:14.978 "sha256", 00:23:14.978 "sha384", 00:23:14.978 "sha512" 00:23:14.978 ], 00:23:14.978 "dhchap_dhgroups": [ 00:23:14.978 "null", 00:23:14.978 "ffdhe2048", 00:23:14.978 "ffdhe3072", 00:23:14.978 "ffdhe4096", 00:23:14.978 "ffdhe6144", 00:23:14.978 "ffdhe8192" 00:23:14.978 ] 00:23:14.978 } 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "method": "bdev_nvme_set_hotplug", 00:23:14.978 "params": { 00:23:14.978 "period_us": 100000, 00:23:14.978 "enable": false 00:23:14.978 } 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "method": "bdev_malloc_create", 00:23:14.978 "params": { 00:23:14.978 "name": "malloc0", 00:23:14.978 "num_blocks": 8192, 00:23:14.978 "block_size": 4096, 00:23:14.978 "physical_block_size": 4096, 00:23:14.978 "uuid": "d36db4f2-c0f6-434b-9daa-3ca4e1e30f05", 00:23:14.978 "optimal_io_boundary": 0, 00:23:14.978 "md_size": 0, 00:23:14.978 "dif_type": 0, 00:23:14.978 "dif_is_head_of_md": false, 00:23:14.978 "dif_pi_format": 0 00:23:14.978 } 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "method": "bdev_wait_for_examine" 00:23:14.978 } 00:23:14.978 ] 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "subsystem": "nbd", 00:23:14.978 "config": [] 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "subsystem": "scheduler", 00:23:14.978 "config": [ 00:23:14.978 { 00:23:14.978 "method": "framework_set_scheduler", 00:23:14.978 "params": { 00:23:14.978 "name": "static" 00:23:14.978 } 00:23:14.978 } 00:23:14.978 ] 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "subsystem": "nvmf", 00:23:14.978 "config": [ 00:23:14.978 { 00:23:14.978 "method": "nvmf_set_config", 00:23:14.978 "params": { 00:23:14.978 "discovery_filter": "match_any", 00:23:14.978 "admin_cmd_passthru": { 00:23:14.978 "identify_ctrlr": false 00:23:14.978 }, 00:23:14.978 "dhchap_digests": [ 00:23:14.978 "sha256", 00:23:14.978 "sha384", 00:23:14.978 "sha512" 00:23:14.978 ], 00:23:14.978 "dhchap_dhgroups": [ 00:23:14.978 "null", 00:23:14.978 "ffdhe2048", 00:23:14.978 "ffdhe3072", 00:23:14.978 "ffdhe4096", 00:23:14.978 "ffdhe6144", 00:23:14.978 "ffdhe8192" 00:23:14.978 ] 00:23:14.978 } 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "method": "nvmf_set_max_subsystems", 00:23:14.978 "params": { 00:23:14.978 "max_subsystems": 1024 00:23:14.978 } 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "method": "nvmf_set_crdt", 00:23:14.978 "params": { 00:23:14.978 "crdt1": 0, 00:23:14.978 "crdt2": 0, 00:23:14.978 "crdt3": 0 00:23:14.978 } 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "method": "nvmf_create_transport", 00:23:14.978 "params": { 00:23:14.978 "trtype": "TCP", 00:23:14.978 "max_queue_depth": 128, 00:23:14.978 "max_io_qpairs_per_ctrlr": 127, 00:23:14.978 "in_capsule_data_size": 4096, 00:23:14.978 "max_io_size": 131072, 00:23:14.978 "io_unit_size": 131072, 00:23:14.978 "max_aq_depth": 128, 00:23:14.978 "num_shared_buffers": 511, 00:23:14.978 "buf_cache_size": 4294967295, 00:23:14.978 "dif_insert_or_strip": false, 00:23:14.978 "zcopy": false, 00:23:14.978 "c2h_success": false, 00:23:14.978 "sock_priority": 0, 00:23:14.978 "abort_timeout_sec": 1, 00:23:14.978 "ack_timeout": 0, 00:23:14.978 "data_wr_pool_size": 0 00:23:14.978 } 00:23:14.978 }, 00:23:14.978 { 00:23:14.978 "method": "nvmf_create_subsystem", 00:23:14.978 "params": { 00:23:14.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.979 "allow_any_host": false, 00:23:14.979 "serial_number": "SPDK00000000000001", 00:23:14.979 "model_number": "SPDK bdev Controller", 00:23:14.979 "max_namespaces": 10, 00:23:14.979 "min_cntlid": 1, 00:23:14.979 
"max_cntlid": 65519, 00:23:14.979 "ana_reporting": false 00:23:14.979 } 00:23:14.979 }, 00:23:14.979 { 00:23:14.979 "method": "nvmf_subsystem_add_host", 00:23:14.979 "params": { 00:23:14.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.979 "host": "nqn.2016-06.io.spdk:host1", 00:23:14.979 "psk": "key0" 00:23:14.979 } 00:23:14.979 }, 00:23:14.979 { 00:23:14.979 "method": "nvmf_subsystem_add_ns", 00:23:14.979 "params": { 00:23:14.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.979 "namespace": { 00:23:14.979 "nsid": 1, 00:23:14.979 "bdev_name": "malloc0", 00:23:14.979 "nguid": "D36DB4F2C0F6434B9DAA3CA4E1E30F05", 00:23:14.979 "uuid": "d36db4f2-c0f6-434b-9daa-3ca4e1e30f05", 00:23:14.979 "no_auto_visible": false 00:23:14.979 } 00:23:14.979 } 00:23:14.979 }, 00:23:14.979 { 00:23:14.979 "method": "nvmf_subsystem_add_listener", 00:23:14.979 "params": { 00:23:14.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.979 "listen_address": { 00:23:14.979 "trtype": "TCP", 00:23:14.979 "adrfam": "IPv4", 00:23:14.979 "traddr": "10.0.0.2", 00:23:14.979 "trsvcid": "4420" 00:23:14.979 }, 00:23:14.979 "secure_channel": true 00:23:14.979 } 00:23:14.979 } 00:23:14.979 ] 00:23:14.979 } 00:23:14.979 ] 00:23:14.979 }' 00:23:14.979 10:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:15.240 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:15.240 "subsystems": [ 00:23:15.240 { 00:23:15.240 "subsystem": "keyring", 00:23:15.240 "config": [ 00:23:15.240 { 00:23:15.240 "method": "keyring_file_add_key", 00:23:15.240 "params": { 00:23:15.240 "name": "key0", 00:23:15.240 "path": "/tmp/tmp.CEomI34Ee9" 00:23:15.240 } 00:23:15.240 } 00:23:15.240 ] 00:23:15.240 }, 00:23:15.240 { 00:23:15.240 "subsystem": "iobuf", 00:23:15.240 "config": [ 00:23:15.240 { 00:23:15.240 "method": "iobuf_set_options", 00:23:15.240 "params": { 00:23:15.240 "small_pool_count": 8192, 00:23:15.240 "large_pool_count": 1024, 00:23:15.240 "small_bufsize": 8192, 00:23:15.240 "large_bufsize": 135168, 00:23:15.240 "enable_numa": false 00:23:15.240 } 00:23:15.240 } 00:23:15.240 ] 00:23:15.240 }, 00:23:15.240 { 00:23:15.240 "subsystem": "sock", 00:23:15.240 "config": [ 00:23:15.240 { 00:23:15.240 "method": "sock_set_default_impl", 00:23:15.240 "params": { 00:23:15.240 "impl_name": "posix" 00:23:15.240 } 00:23:15.240 }, 00:23:15.240 { 00:23:15.240 "method": "sock_impl_set_options", 00:23:15.240 "params": { 00:23:15.240 "impl_name": "ssl", 00:23:15.240 "recv_buf_size": 4096, 00:23:15.240 "send_buf_size": 4096, 00:23:15.240 "enable_recv_pipe": true, 00:23:15.240 "enable_quickack": false, 00:23:15.241 "enable_placement_id": 0, 00:23:15.241 "enable_zerocopy_send_server": true, 00:23:15.241 "enable_zerocopy_send_client": false, 00:23:15.241 "zerocopy_threshold": 0, 00:23:15.241 "tls_version": 0, 00:23:15.241 "enable_ktls": false 00:23:15.241 } 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "method": "sock_impl_set_options", 00:23:15.241 "params": { 00:23:15.241 "impl_name": "posix", 00:23:15.241 "recv_buf_size": 2097152, 00:23:15.241 "send_buf_size": 2097152, 00:23:15.241 "enable_recv_pipe": true, 00:23:15.241 "enable_quickack": false, 00:23:15.241 "enable_placement_id": 0, 00:23:15.241 "enable_zerocopy_send_server": true, 00:23:15.241 "enable_zerocopy_send_client": false, 00:23:15.241 "zerocopy_threshold": 0, 00:23:15.241 "tls_version": 0, 00:23:15.241 "enable_ktls": false 00:23:15.241 } 00:23:15.241 
} 00:23:15.241 ] 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "subsystem": "vmd", 00:23:15.241 "config": [] 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "subsystem": "accel", 00:23:15.241 "config": [ 00:23:15.241 { 00:23:15.241 "method": "accel_set_options", 00:23:15.241 "params": { 00:23:15.241 "small_cache_size": 128, 00:23:15.241 "large_cache_size": 16, 00:23:15.241 "task_count": 2048, 00:23:15.241 "sequence_count": 2048, 00:23:15.241 "buf_count": 2048 00:23:15.241 } 00:23:15.241 } 00:23:15.241 ] 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "subsystem": "bdev", 00:23:15.241 "config": [ 00:23:15.241 { 00:23:15.241 "method": "bdev_set_options", 00:23:15.241 "params": { 00:23:15.241 "bdev_io_pool_size": 65535, 00:23:15.241 "bdev_io_cache_size": 256, 00:23:15.241 "bdev_auto_examine": true, 00:23:15.241 "iobuf_small_cache_size": 128, 00:23:15.241 "iobuf_large_cache_size": 16 00:23:15.241 } 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "method": "bdev_raid_set_options", 00:23:15.241 "params": { 00:23:15.241 "process_window_size_kb": 1024, 00:23:15.241 "process_max_bandwidth_mb_sec": 0 00:23:15.241 } 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "method": "bdev_iscsi_set_options", 00:23:15.241 "params": { 00:23:15.241 "timeout_sec": 30 00:23:15.241 } 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "method": "bdev_nvme_set_options", 00:23:15.241 "params": { 00:23:15.241 "action_on_timeout": "none", 00:23:15.241 "timeout_us": 0, 00:23:15.241 "timeout_admin_us": 0, 00:23:15.241 "keep_alive_timeout_ms": 10000, 00:23:15.241 "arbitration_burst": 0, 00:23:15.241 "low_priority_weight": 0, 00:23:15.241 "medium_priority_weight": 0, 00:23:15.241 "high_priority_weight": 0, 00:23:15.241 "nvme_adminq_poll_period_us": 10000, 00:23:15.241 "nvme_ioq_poll_period_us": 0, 00:23:15.241 "io_queue_requests": 512, 00:23:15.241 "delay_cmd_submit": true, 00:23:15.241 "transport_retry_count": 4, 00:23:15.241 "bdev_retry_count": 3, 00:23:15.241 "transport_ack_timeout": 0, 00:23:15.241 "ctrlr_loss_timeout_sec": 0, 00:23:15.241 "reconnect_delay_sec": 0, 00:23:15.241 "fast_io_fail_timeout_sec": 0, 00:23:15.241 "disable_auto_failback": false, 00:23:15.241 "generate_uuids": false, 00:23:15.241 "transport_tos": 0, 00:23:15.241 "nvme_error_stat": false, 00:23:15.241 "rdma_srq_size": 0, 00:23:15.241 "io_path_stat": false, 00:23:15.241 "allow_accel_sequence": false, 00:23:15.241 "rdma_max_cq_size": 0, 00:23:15.241 "rdma_cm_event_timeout_ms": 0, 00:23:15.241 "dhchap_digests": [ 00:23:15.241 "sha256", 00:23:15.241 "sha384", 00:23:15.241 "sha512" 00:23:15.241 ], 00:23:15.241 "dhchap_dhgroups": [ 00:23:15.241 "null", 00:23:15.241 "ffdhe2048", 00:23:15.241 "ffdhe3072", 00:23:15.241 "ffdhe4096", 00:23:15.241 "ffdhe6144", 00:23:15.241 "ffdhe8192" 00:23:15.241 ] 00:23:15.241 } 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "method": "bdev_nvme_attach_controller", 00:23:15.241 "params": { 00:23:15.241 "name": "TLSTEST", 00:23:15.241 "trtype": "TCP", 00:23:15.241 "adrfam": "IPv4", 00:23:15.241 "traddr": "10.0.0.2", 00:23:15.241 "trsvcid": "4420", 00:23:15.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.241 "prchk_reftag": false, 00:23:15.241 "prchk_guard": false, 00:23:15.241 "ctrlr_loss_timeout_sec": 0, 00:23:15.241 "reconnect_delay_sec": 0, 00:23:15.241 "fast_io_fail_timeout_sec": 0, 00:23:15.241 "psk": "key0", 00:23:15.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.241 "hdgst": false, 00:23:15.241 "ddgst": false, 00:23:15.241 "multipath": "multipath" 00:23:15.241 } 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "method": 
"bdev_nvme_set_hotplug", 00:23:15.241 "params": { 00:23:15.241 "period_us": 100000, 00:23:15.241 "enable": false 00:23:15.241 } 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "method": "bdev_wait_for_examine" 00:23:15.241 } 00:23:15.241 ] 00:23:15.241 }, 00:23:15.241 { 00:23:15.241 "subsystem": "nbd", 00:23:15.241 "config": [] 00:23:15.241 } 00:23:15.241 ] 00:23:15.241 }' 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1044324 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1044324 ']' 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1044324 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1044324 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1044324' 00:23:15.241 killing process with pid 1044324 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1044324 00:23:15.241 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.241 00:23:15.241 Latency(us) 00:23:15.241 [2024-11-19T09:50:54.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.241 [2024-11-19T09:50:54.436Z] =================================================================================================================== 00:23:15.241 [2024-11-19T09:50:54.436Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1044324 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1043930 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1043930 ']' 00:23:15.241 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1043930 00:23:15.242 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.242 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.242 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1043930 00:23:15.242 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:15.242 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:15.242 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1043930' 00:23:15.242 killing process with pid 1043930 00:23:15.242 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1043930 00:23:15.242 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1043930 00:23:15.509 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:15.509 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.509 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.509 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.509 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:15.509 "subsystems": [ 00:23:15.509 { 00:23:15.509 "subsystem": "keyring", 00:23:15.509 "config": [ 00:23:15.509 { 00:23:15.509 "method": "keyring_file_add_key", 00:23:15.509 "params": { 00:23:15.509 "name": "key0", 00:23:15.509 "path": "/tmp/tmp.CEomI34Ee9" 00:23:15.509 } 00:23:15.509 } 00:23:15.509 ] 00:23:15.509 }, 00:23:15.509 { 00:23:15.509 "subsystem": "iobuf", 00:23:15.509 "config": [ 00:23:15.509 { 00:23:15.509 "method": "iobuf_set_options", 00:23:15.509 "params": { 00:23:15.509 "small_pool_count": 8192, 00:23:15.509 "large_pool_count": 1024, 00:23:15.509 "small_bufsize": 8192, 00:23:15.509 "large_bufsize": 135168, 00:23:15.509 "enable_numa": false 00:23:15.509 } 00:23:15.509 } 00:23:15.509 ] 00:23:15.509 }, 00:23:15.509 { 00:23:15.509 "subsystem": "sock", 00:23:15.509 "config": [ 00:23:15.509 { 00:23:15.509 "method": "sock_set_default_impl", 00:23:15.509 "params": { 00:23:15.509 "impl_name": "posix" 00:23:15.509 } 00:23:15.509 }, 00:23:15.509 { 00:23:15.509 "method": "sock_impl_set_options", 00:23:15.509 "params": { 00:23:15.509 "impl_name": "ssl", 00:23:15.509 "recv_buf_size": 4096, 00:23:15.509 "send_buf_size": 4096, 00:23:15.509 "enable_recv_pipe": true, 00:23:15.509 "enable_quickack": false, 00:23:15.509 "enable_placement_id": 0, 00:23:15.509 "enable_zerocopy_send_server": true, 00:23:15.509 "enable_zerocopy_send_client": false, 00:23:15.509 "zerocopy_threshold": 0, 00:23:15.509 "tls_version": 0, 00:23:15.509 "enable_ktls": false 00:23:15.509 } 00:23:15.509 }, 00:23:15.509 { 00:23:15.509 "method": "sock_impl_set_options", 00:23:15.509 "params": { 00:23:15.509 "impl_name": "posix", 00:23:15.509 "recv_buf_size": 2097152, 00:23:15.509 "send_buf_size": 2097152, 00:23:15.509 "enable_recv_pipe": true, 00:23:15.509 "enable_quickack": false, 00:23:15.509 "enable_placement_id": 0, 00:23:15.509 "enable_zerocopy_send_server": true, 00:23:15.509 "enable_zerocopy_send_client": false, 00:23:15.509 "zerocopy_threshold": 0, 00:23:15.509 "tls_version": 0, 00:23:15.509 "enable_ktls": false 00:23:15.509 } 00:23:15.509 } 00:23:15.509 ] 00:23:15.509 }, 00:23:15.509 { 00:23:15.509 "subsystem": "vmd", 00:23:15.509 "config": [] 00:23:15.509 }, 00:23:15.509 { 00:23:15.509 "subsystem": "accel", 00:23:15.509 "config": [ 00:23:15.509 { 00:23:15.509 "method": "accel_set_options", 00:23:15.509 "params": { 00:23:15.509 "small_cache_size": 128, 00:23:15.509 "large_cache_size": 16, 00:23:15.509 "task_count": 2048, 00:23:15.509 "sequence_count": 2048, 00:23:15.509 "buf_count": 2048 00:23:15.509 } 00:23:15.509 } 00:23:15.509 ] 00:23:15.509 }, 00:23:15.509 { 00:23:15.509 "subsystem": "bdev", 00:23:15.509 "config": [ 00:23:15.509 { 00:23:15.509 "method": "bdev_set_options", 00:23:15.509 "params": { 00:23:15.509 "bdev_io_pool_size": 65535, 00:23:15.509 "bdev_io_cache_size": 256, 00:23:15.509 "bdev_auto_examine": true, 00:23:15.509 "iobuf_small_cache_size": 128, 00:23:15.509 "iobuf_large_cache_size": 16 00:23:15.509 } 00:23:15.509 }, 00:23:15.509 { 00:23:15.509 "method": "bdev_raid_set_options", 00:23:15.509 "params": { 00:23:15.509 
"process_window_size_kb": 1024, 00:23:15.509 "process_max_bandwidth_mb_sec": 0 00:23:15.509 } 00:23:15.509 }, 00:23:15.509 { 00:23:15.509 "method": "bdev_iscsi_set_options", 00:23:15.509 "params": { 00:23:15.509 "timeout_sec": 30 00:23:15.509 } 00:23:15.509 }, 00:23:15.509 { 00:23:15.509 "method": "bdev_nvme_set_options", 00:23:15.509 "params": { 00:23:15.509 "action_on_timeout": "none", 00:23:15.509 "timeout_us": 0, 00:23:15.509 "timeout_admin_us": 0, 00:23:15.509 "keep_alive_timeout_ms": 10000, 00:23:15.509 "arbitration_burst": 0, 00:23:15.509 "low_priority_weight": 0, 00:23:15.509 "medium_priority_weight": 0, 00:23:15.509 "high_priority_weight": 0, 00:23:15.509 "nvme_adminq_poll_period_us": 10000, 00:23:15.509 "nvme_ioq_poll_period_us": 0, 00:23:15.509 "io_queue_requests": 0, 00:23:15.509 "delay_cmd_submit": true, 00:23:15.509 "transport_retry_count": 4, 00:23:15.509 "bdev_retry_count": 3, 00:23:15.509 "transport_ack_timeout": 0, 00:23:15.509 "ctrlr_loss_timeout_sec": 0, 00:23:15.509 "reconnect_delay_sec": 0, 00:23:15.509 "fast_io_fail_timeout_sec": 0, 00:23:15.509 "disable_auto_failback": false, 00:23:15.509 "generate_uuids": false, 00:23:15.509 "transport_tos": 0, 00:23:15.509 "nvme_error_stat": false, 00:23:15.509 "rdma_srq_size": 0, 00:23:15.509 "io_path_stat": false, 00:23:15.509 "allow_accel_sequence": false, 00:23:15.509 "rdma_max_cq_size": 0, 00:23:15.509 "rdma_cm_event_timeout_ms": 0, 00:23:15.509 "dhchap_digests": [ 00:23:15.509 "sha256", 00:23:15.509 "sha384", 00:23:15.509 "sha512" 00:23:15.509 ], 00:23:15.509 "dhchap_dhgroups": [ 00:23:15.509 "null", 00:23:15.509 "ffdhe2048", 00:23:15.509 "ffdhe3072", 00:23:15.510 "ffdhe4096", 00:23:15.510 "ffdhe6144", 00:23:15.510 "ffdhe8192" 00:23:15.510 ] 00:23:15.510 } 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "method": "bdev_nvme_set_hotplug", 00:23:15.510 "params": { 00:23:15.510 "period_us": 100000, 00:23:15.510 "enable": false 00:23:15.510 } 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "method": "bdev_malloc_create", 00:23:15.510 "params": { 00:23:15.510 "name": "malloc0", 00:23:15.510 "num_blocks": 8192, 00:23:15.510 "block_size": 4096, 00:23:15.510 "physical_block_size": 4096, 00:23:15.510 "uuid": "d36db4f2-c0f6-434b-9daa-3ca4e1e30f05", 00:23:15.510 "optimal_io_boundary": 0, 00:23:15.510 "md_size": 0, 00:23:15.510 "dif_type": 0, 00:23:15.510 "dif_is_head_of_md": false, 00:23:15.510 "dif_pi_format": 0 00:23:15.510 } 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "method": "bdev_wait_for_examine" 00:23:15.510 } 00:23:15.510 ] 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "subsystem": "nbd", 00:23:15.510 "config": [] 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "subsystem": "scheduler", 00:23:15.510 "config": [ 00:23:15.510 { 00:23:15.510 "method": "framework_set_scheduler", 00:23:15.510 "params": { 00:23:15.510 "name": "static" 00:23:15.510 } 00:23:15.510 } 00:23:15.510 ] 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "subsystem": "nvmf", 00:23:15.510 "config": [ 00:23:15.510 { 00:23:15.510 "method": "nvmf_set_config", 00:23:15.510 "params": { 00:23:15.510 "discovery_filter": "match_any", 00:23:15.510 "admin_cmd_passthru": { 00:23:15.510 "identify_ctrlr": false 00:23:15.510 }, 00:23:15.510 "dhchap_digests": [ 00:23:15.510 "sha256", 00:23:15.510 "sha384", 00:23:15.510 "sha512" 00:23:15.510 ], 00:23:15.510 "dhchap_dhgroups": [ 00:23:15.510 "null", 00:23:15.510 "ffdhe2048", 00:23:15.510 "ffdhe3072", 00:23:15.510 "ffdhe4096", 00:23:15.510 "ffdhe6144", 00:23:15.510 "ffdhe8192" 00:23:15.510 ] 00:23:15.510 } 00:23:15.510 }, 00:23:15.510 { 
00:23:15.510 "method": "nvmf_set_max_subsystems", 00:23:15.510 "params": { 00:23:15.510 "max_subsystems": 1024 00:23:15.510 } 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "method": "nvmf_set_crdt", 00:23:15.510 "params": { 00:23:15.510 "crdt1": 0, 00:23:15.510 "crdt2": 0, 00:23:15.510 "crdt3": 0 00:23:15.510 } 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "method": "nvmf_create_transport", 00:23:15.510 "params": { 00:23:15.510 "trtype": "TCP", 00:23:15.510 "max_queue_depth": 128, 00:23:15.510 "max_io_qpairs_per_ctrlr": 127, 00:23:15.510 "in_capsule_data_size": 4096, 00:23:15.510 "max_io_size": 131072, 00:23:15.510 "io_unit_size": 131072, 00:23:15.510 "max_aq_depth": 128, 00:23:15.510 "num_shared_buffers": 511, 00:23:15.510 "buf_cache_size": 4294967295, 00:23:15.510 "dif_insert_or_strip": false, 00:23:15.510 "zcopy": false, 00:23:15.510 "c2h_success": false, 00:23:15.510 "sock_priority": 0, 00:23:15.510 "abort_timeout_sec": 1, 00:23:15.510 "ack_timeout": 0, 00:23:15.510 "data_wr_pool_size": 0 00:23:15.510 } 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "method": "nvmf_create_subsystem", 00:23:15.510 "params": { 00:23:15.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.510 "allow_any_host": false, 00:23:15.510 "serial_number": "SPDK00000000000001", 00:23:15.510 "model_number": "SPDK bdev Controller", 00:23:15.510 "max_namespaces": 10, 00:23:15.510 "min_cntlid": 1, 00:23:15.510 "max_cntlid": 65519, 00:23:15.510 "ana_reporting": false 00:23:15.510 } 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "method": "nvmf_subsystem_add_host", 00:23:15.510 "params": { 00:23:15.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.510 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.510 "psk": "key0" 00:23:15.510 } 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "method": "nvmf_subsystem_add_ns", 00:23:15.510 "params": { 00:23:15.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.510 "namespace": { 00:23:15.510 "nsid": 1, 00:23:15.510 "bdev_name": "malloc0", 00:23:15.510 "nguid": "D36DB4F2C0F6434B9DAA3CA4E1E30F05", 00:23:15.510 "uuid": "d36db4f2-c0f6-434b-9daa-3ca4e1e30f05", 00:23:15.510 "no_auto_visible": false 00:23:15.510 } 00:23:15.510 } 00:23:15.510 }, 00:23:15.510 { 00:23:15.510 "method": "nvmf_subsystem_add_listener", 00:23:15.510 "params": { 00:23:15.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.510 "listen_address": { 00:23:15.510 "trtype": "TCP", 00:23:15.510 "adrfam": "IPv4", 00:23:15.510 "traddr": "10.0.0.2", 00:23:15.510 "trsvcid": "4420" 00:23:15.510 }, 00:23:15.510 "secure_channel": true 00:23:15.510 } 00:23:15.510 } 00:23:15.510 ] 00:23:15.510 } 00:23:15.510 ] 00:23:15.510 }' 00:23:15.510 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1044682 00:23:15.510 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1044682 00:23:15.510 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:15.510 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1044682 ']' 00:23:15.510 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.510 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.510 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:23:15.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.510 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.510 10:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.510 [2024-11-19 10:50:54.586780] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:23:15.510 [2024-11-19 10:50:54.586832] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.510 [2024-11-19 10:50:54.677255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.827 [2024-11-19 10:50:54.707290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.827 [2024-11-19 10:50:54.707321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.827 [2024-11-19 10:50:54.707327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.827 [2024-11-19 10:50:54.707332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.827 [2024-11-19 10:50:54.707336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.827 [2024-11-19 10:50:54.707813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.827 [2024-11-19 10:50:54.900067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.827 [2024-11-19 10:50:54.932086] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.827 [2024-11-19 10:50:54.932281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1044914 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1044914 /var/tmp/bdevperf.sock 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1044914 ']' 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
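At this point the JSON captured earlier with save_config (target/tls.sh@198) is being fed back into a fresh target via -c /dev/fd/62, and the bdevperf copy goes in through -c /dev/fd/63. A /dev/fd/NN path like this normally indicates bash process substitution, so a plausible sketch of the round-trip (the netns wrapper from the trace is omitted for brevity) is:

  # Sketch of the save_config round-trip, assuming bash process substitution.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

  tgtconf=$("$RPC" save_config)            # JSON dump of the live target
  "$TGT" -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &   # the <(...) fd shows up as /dev/fd/NN

This is why the new target comes up already configured: the keyring, transport, subsystem, namespace, and secure-channel listener are replayed from the config instead of being issued as individual RPCs.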
00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:16.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.521 10:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:16.521 "subsystems": [ 00:23:16.521 { 00:23:16.521 "subsystem": "keyring", 00:23:16.521 "config": [ 00:23:16.521 { 00:23:16.521 "method": "keyring_file_add_key", 00:23:16.521 "params": { 00:23:16.521 "name": "key0", 00:23:16.521 "path": "/tmp/tmp.CEomI34Ee9" 00:23:16.521 } 00:23:16.521 } 00:23:16.521 ] 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "subsystem": "iobuf", 00:23:16.522 "config": [ 00:23:16.522 { 00:23:16.522 "method": "iobuf_set_options", 00:23:16.522 "params": { 00:23:16.522 "small_pool_count": 8192, 00:23:16.522 "large_pool_count": 1024, 00:23:16.522 "small_bufsize": 8192, 00:23:16.522 "large_bufsize": 135168, 00:23:16.522 "enable_numa": false 00:23:16.522 } 00:23:16.522 } 00:23:16.522 ] 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "subsystem": "sock", 00:23:16.522 "config": [ 00:23:16.522 { 00:23:16.522 "method": "sock_set_default_impl", 00:23:16.522 "params": { 00:23:16.522 "impl_name": "posix" 00:23:16.522 } 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "method": "sock_impl_set_options", 00:23:16.522 "params": { 00:23:16.522 "impl_name": "ssl", 00:23:16.522 "recv_buf_size": 4096, 00:23:16.522 "send_buf_size": 4096, 00:23:16.522 "enable_recv_pipe": true, 00:23:16.522 "enable_quickack": false, 00:23:16.522 "enable_placement_id": 0, 00:23:16.522 "enable_zerocopy_send_server": true, 00:23:16.522 "enable_zerocopy_send_client": false, 00:23:16.522 "zerocopy_threshold": 0, 00:23:16.522 "tls_version": 0, 00:23:16.522 "enable_ktls": false 00:23:16.522 } 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "method": "sock_impl_set_options", 00:23:16.522 "params": { 00:23:16.522 "impl_name": "posix", 00:23:16.522 "recv_buf_size": 2097152, 00:23:16.522 "send_buf_size": 2097152, 00:23:16.522 "enable_recv_pipe": true, 00:23:16.522 "enable_quickack": false, 00:23:16.522 "enable_placement_id": 0, 00:23:16.522 "enable_zerocopy_send_server": true, 00:23:16.522 "enable_zerocopy_send_client": false, 00:23:16.522 "zerocopy_threshold": 0, 00:23:16.522 "tls_version": 0, 00:23:16.522 "enable_ktls": false 00:23:16.522 } 00:23:16.522 } 00:23:16.522 ] 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "subsystem": "vmd", 00:23:16.522 "config": [] 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "subsystem": "accel", 00:23:16.522 "config": [ 00:23:16.522 { 00:23:16.522 "method": "accel_set_options", 00:23:16.522 "params": { 00:23:16.522 "small_cache_size": 128, 00:23:16.522 "large_cache_size": 16, 00:23:16.522 "task_count": 2048, 00:23:16.522 "sequence_count": 2048, 00:23:16.522 "buf_count": 2048 00:23:16.522 } 00:23:16.522 } 00:23:16.522 ] 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "subsystem": "bdev", 00:23:16.522 "config": [ 00:23:16.522 { 00:23:16.522 "method": "bdev_set_options", 00:23:16.522 "params": { 00:23:16.522 "bdev_io_pool_size": 65535, 00:23:16.522 "bdev_io_cache_size": 256, 00:23:16.522 "bdev_auto_examine": true, 00:23:16.522 "iobuf_small_cache_size": 128, 
00:23:16.522 "iobuf_large_cache_size": 16 00:23:16.522 } 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "method": "bdev_raid_set_options", 00:23:16.522 "params": { 00:23:16.522 "process_window_size_kb": 1024, 00:23:16.522 "process_max_bandwidth_mb_sec": 0 00:23:16.522 } 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "method": "bdev_iscsi_set_options", 00:23:16.522 "params": { 00:23:16.522 "timeout_sec": 30 00:23:16.522 } 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "method": "bdev_nvme_set_options", 00:23:16.522 "params": { 00:23:16.522 "action_on_timeout": "none", 00:23:16.522 "timeout_us": 0, 00:23:16.522 "timeout_admin_us": 0, 00:23:16.522 "keep_alive_timeout_ms": 10000, 00:23:16.522 "arbitration_burst": 0, 00:23:16.522 "low_priority_weight": 0, 00:23:16.522 "medium_priority_weight": 0, 00:23:16.522 "high_priority_weight": 0, 00:23:16.522 "nvme_adminq_poll_period_us": 10000, 00:23:16.522 "nvme_ioq_poll_period_us": 0, 00:23:16.522 "io_queue_requests": 512, 00:23:16.522 "delay_cmd_submit": true, 00:23:16.522 "transport_retry_count": 4, 00:23:16.522 "bdev_retry_count": 3, 00:23:16.522 "transport_ack_timeout": 0, 00:23:16.522 "ctrlr_loss_timeout_sec": 0, 00:23:16.522 "reconnect_delay_sec": 0, 00:23:16.522 "fast_io_fail_timeout_sec": 0, 00:23:16.522 "disable_auto_failback": false, 00:23:16.522 "generate_uuids": false, 00:23:16.522 "transport_tos": 0, 00:23:16.522 "nvme_error_stat": false, 00:23:16.522 "rdma_srq_size": 0, 00:23:16.522 "io_path_stat": false, 00:23:16.522 "allow_accel_sequence": false, 00:23:16.522 "rdma_max_cq_size": 0, 00:23:16.522 "rdma_cm_event_timeout_ms": 0, 00:23:16.522 "dhchap_digests": [ 00:23:16.522 "sha256", 00:23:16.522 "sha384", 00:23:16.522 "sha512" 00:23:16.522 ], 00:23:16.522 "dhchap_dhgroups": [ 00:23:16.522 "null", 00:23:16.522 "ffdhe2048", 00:23:16.522 "ffdhe3072", 00:23:16.522 "ffdhe4096", 00:23:16.522 "ffdhe6144", 00:23:16.522 "ffdhe8192" 00:23:16.522 ] 00:23:16.522 } 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "method": "bdev_nvme_attach_controller", 00:23:16.522 "params": { 00:23:16.522 "name": "TLSTEST", 00:23:16.522 "trtype": "TCP", 00:23:16.522 "adrfam": "IPv4", 00:23:16.522 "traddr": "10.0.0.2", 00:23:16.522 "trsvcid": "4420", 00:23:16.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.522 "prchk_reftag": false, 00:23:16.522 "prchk_guard": false, 00:23:16.522 "ctrlr_loss_timeout_sec": 0, 00:23:16.522 "reconnect_delay_sec": 0, 00:23:16.522 "fast_io_fail_timeout_sec": 0, 00:23:16.522 "psk": "key0", 00:23:16.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.522 "hdgst": false, 00:23:16.522 "ddgst": false, 00:23:16.522 "multipath": "multipath" 00:23:16.522 } 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "method": "bdev_nvme_set_hotplug", 00:23:16.522 "params": { 00:23:16.522 "period_us": 100000, 00:23:16.522 "enable": false 00:23:16.522 } 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "method": "bdev_wait_for_examine" 00:23:16.522 } 00:23:16.522 ] 00:23:16.522 }, 00:23:16.522 { 00:23:16.522 "subsystem": "nbd", 00:23:16.522 "config": [] 00:23:16.522 } 00:23:16.522 ] 00:23:16.522 }' 00:23:16.522 [2024-11-19 10:50:55.482904] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:23:16.522 [2024-11-19 10:50:55.482956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044914 ] 00:23:16.522 [2024-11-19 10:50:55.564168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.522 [2024-11-19 10:50:55.593426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.805 [2024-11-19 10:50:55.727530] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.377 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.377 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:17.377 10:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.377 Running I/O for 10 seconds... 00:23:19.261 5216.00 IOPS, 20.38 MiB/s [2024-11-19T09:50:59.410Z] 5486.50 IOPS, 21.43 MiB/s [2024-11-19T09:51:00.795Z] 5623.00 IOPS, 21.96 MiB/s [2024-11-19T09:51:01.366Z] 5596.00 IOPS, 21.86 MiB/s [2024-11-19T09:51:02.749Z] 5500.40 IOPS, 21.49 MiB/s [2024-11-19T09:51:03.690Z] 5498.33 IOPS, 21.48 MiB/s [2024-11-19T09:51:04.631Z] 5596.71 IOPS, 21.86 MiB/s [2024-11-19T09:51:05.573Z] 5661.88 IOPS, 22.12 MiB/s [2024-11-19T09:51:06.515Z] 5627.11 IOPS, 21.98 MiB/s [2024-11-19T09:51:06.515Z] 5690.10 IOPS, 22.23 MiB/s 00:23:27.320 Latency(us) 00:23:27.320 [2024-11-19T09:51:06.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.320 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:27.320 Verification LBA range: start 0x0 length 0x2000 00:23:27.320 TLSTESTn1 : 10.02 5693.16 22.24 0.00 0.00 22450.29 5106.35 23811.41 00:23:27.320 [2024-11-19T09:51:06.515Z] =================================================================================================================== 00:23:27.320 [2024-11-19T09:51:06.515Z] Total : 5693.16 22.24 0.00 0.00 22450.29 5106.35 23811.41 00:23:27.320 { 00:23:27.320 "results": [ 00:23:27.320 { 00:23:27.320 "job": "TLSTESTn1", 00:23:27.320 "core_mask": "0x4", 00:23:27.320 "workload": "verify", 00:23:27.320 "status": "finished", 00:23:27.320 "verify_range": { 00:23:27.320 "start": 0, 00:23:27.320 "length": 8192 00:23:27.320 }, 00:23:27.320 "queue_depth": 128, 00:23:27.320 "io_size": 4096, 00:23:27.320 "runtime": 10.016765, 00:23:27.320 "iops": 5693.155424930104, 00:23:27.320 "mibps": 22.23888837863322, 00:23:27.320 "io_failed": 0, 00:23:27.320 "io_timeout": 0, 00:23:27.320 "avg_latency_us": 22450.286826941625, 00:23:27.320 "min_latency_us": 5106.346666666666, 00:23:27.320 "max_latency_us": 23811.413333333334 00:23:27.320 } 00:23:27.320 ], 00:23:27.320 "core_count": 1 00:23:27.320 } 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1044914 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1044914 ']' 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1044914 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1044914 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1044914' 00:23:27.320 killing process with pid 1044914 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1044914 00:23:27.320 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.320 00:23:27.320 Latency(us) 00:23:27.320 [2024-11-19T09:51:06.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.320 [2024-11-19T09:51:06.515Z] =================================================================================================================== 00:23:27.320 [2024-11-19T09:51:06.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.320 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1044914 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1044682 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1044682 ']' 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1044682 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1044682 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1044682' 00:23:27.582 killing process with pid 1044682 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1044682 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1044682 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1047170 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1047170 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
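The teardown above follows the killprocess helper pattern from autotest_common.sh: probe the pid with kill -0, read its comm name to refuse killing a sudo wrapper, send the kill, then wait so the shutdown output (the Latency table) lands in the log. A rough reconstruction from the xtrace — the real helper has more branches than this sketch:

  # killprocess as reconstructed from the xtrace; assumes $pid is a child of this shell.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                       # still alive?
      local name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_1 / reactor_2
      [ "$name" = sudo ] && return 1                   # never kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                      # collect shutdown output
  }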
00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1047170 ']' 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.582 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.843 [2024-11-19 10:51:06.817013] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:23:27.843 [2024-11-19 10:51:06.817076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.843 [2024-11-19 10:51:06.913529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.843 [2024-11-19 10:51:06.963650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.843 [2024-11-19 10:51:06.963703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.843 [2024-11-19 10:51:06.963711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.843 [2024-11-19 10:51:06.963718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.843 [2024-11-19 10:51:06.963725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
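The app_setup_trace notices above are actionable: with -e 0xFFFF the target keeps a tracepoint ring in shared memory, and a snapshot can be pulled while it runs or after it exits. Roughly (the instance id matches the -i 0 the target was started with; the redirect to a file is an illustrative assumption, per the notice text):

  spdk_trace -s nvmf -i 0 > trace.txt     # decode a live snapshot
  cp /dev/shm/nvmf_trace.0 .              # or keep the raw ring for offline decode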
00:23:27.843 [2024-11-19 10:51:06.964527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.785 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.785 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:28.785 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.785 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.785 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.785 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.785 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.CEomI34Ee9 00:23:28.785 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CEomI34Ee9 00:23:28.785 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:28.785 [2024-11-19 10:51:07.827309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.785 10:51:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:29.046 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:29.046 [2024-11-19 10:51:08.224308] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.046 [2024-11-19 10:51:08.224631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.306 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:29.307 malloc0 00:23:29.307 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:29.567 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CEomI34Ee9 00:23:29.828 10:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.088 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1047565 00:23:30.088 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:30.088 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.088 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1047565 /var/tmp/bdevperf.sock 00:23:30.088 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1047565 ']' 00:23:30.088 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.088 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.088 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.088 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.088 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.088 [2024-11-19 10:51:09.089186] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:23:30.088 [2024-11-19 10:51:09.089263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047565 ] 00:23:30.088 [2024-11-19 10:51:09.177912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.088 [2024-11-19 10:51:09.213889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.028 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.028 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:31.028 10:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CEomI34Ee9 00:23:31.028 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:31.289 [2024-11-19 10:51:10.248939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.289 nvme0n1 00:23:31.289 10:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.289 Running I/O for 1 seconds... 
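The trace above is the complete TLS round trip from target/tls.sh: the target gets a TCP transport, a subsystem with a malloc namespace, a TLS-capable listener (-k), and a PSK bound to the host NQN; the initiator then registers the same key file and attaches with --psk. Condensed to the underlying RPC calls (the key path /tmp/tmp.CEomI34Ee9 and the NQNs are the ones from this run; any other names would do):

  RPC=scripts/rpc.py
  KEY=/tmp/tmp.CEomI34Ee9

  # target side, default /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 $KEY
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # initiator side, against bdevperf's RPC socket
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $KEY
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1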
00:23:32.673 4393.00 IOPS, 17.16 MiB/s 00:23:32.673 Latency(us) 00:23:32.673 [2024-11-19T09:51:11.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.673 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:32.673 Verification LBA range: start 0x0 length 0x2000 00:23:32.673 nvme0n1 : 1.01 4458.51 17.42 0.00 0.00 28542.18 6225.92 73837.23 00:23:32.673 [2024-11-19T09:51:11.868Z] =================================================================================================================== 00:23:32.673 [2024-11-19T09:51:11.868Z] Total : 4458.51 17.42 0.00 0.00 28542.18 6225.92 73837.23 00:23:32.673 { 00:23:32.673 "results": [ 00:23:32.673 { 00:23:32.673 "job": "nvme0n1", 00:23:32.673 "core_mask": "0x2", 00:23:32.673 "workload": "verify", 00:23:32.673 "status": "finished", 00:23:32.673 "verify_range": { 00:23:32.673 "start": 0, 00:23:32.673 "length": 8192 00:23:32.673 }, 00:23:32.673 "queue_depth": 128, 00:23:32.673 "io_size": 4096, 00:23:32.673 "runtime": 1.014015, 00:23:32.673 "iops": 4458.513927308768, 00:23:32.673 "mibps": 17.416070028549875, 00:23:32.673 "io_failed": 0, 00:23:32.673 "io_timeout": 0, 00:23:32.673 "avg_latency_us": 28542.180084052205, 00:23:32.673 "min_latency_us": 6225.92, 00:23:32.673 "max_latency_us": 73837.22666666667 00:23:32.673 } 00:23:32.673 ], 00:23:32.673 "core_count": 1 00:23:32.673 } 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1047565 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1047565 ']' 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1047565 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1047565 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1047565' 00:23:32.673 killing process with pid 1047565 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1047565 00:23:32.673 Received shutdown signal, test time was about 1.000000 seconds 00:23:32.673 00:23:32.673 Latency(us) 00:23:32.673 [2024-11-19T09:51:11.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.673 [2024-11-19T09:51:11.868Z] =================================================================================================================== 00:23:32.673 [2024-11-19T09:51:11.868Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1047565 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1047170 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1047170 ']' 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1047170 00:23:32.673 10:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1047170 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1047170' 00:23:32.673 killing process with pid 1047170 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1047170 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1047170 00:23:32.673 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1048677 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1048677 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1048677 ']' 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.674 10:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.674 [2024-11-19 10:51:11.863625] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:23:32.674 [2024-11-19 10:51:11.863679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.932 [2024-11-19 10:51:11.957072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.932 [2024-11-19 10:51:11.991487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.932 [2024-11-19 10:51:11.991524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
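Each perform_tests run above prints its results twice, as a human-readable table and as a JSON object. The JSON reduces cleanly with jq once one of the { "results": [...] } blocks is saved to a file (a convenience sketch, not part of tls.sh; the filename is illustrative):

  jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, avg \(.avg_latency_us|floor) us"' results.json
  # -> "nvme0n1: 4458 IOPS, avg 28542 us" for the run above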
00:23:32.932 [2024-11-19 10:51:11.991532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.932 [2024-11-19 10:51:11.991539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.932 [2024-11-19 10:51:11.991545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.932 [2024-11-19 10:51:11.992151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.503 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.503 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.503 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.503 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.503 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.765 [2024-11-19 10:51:12.713280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.765 malloc0 00:23:33.765 [2024-11-19 10:51:12.743429] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.765 [2024-11-19 10:51:12.743750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1048850 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1048850 /var/tmp/bdevperf.sock 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1048850 ']' 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.765 10:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.765 [2024-11-19 10:51:12.826627] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
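The recurring "Waiting for process to start up and listen on UNIX domain socket ..." lines come from a polling helper: it retries a harmless RPC against the socket until the app answers, while checking that the PID is still alive. A rough sketch of the idea (the in-tree waitforlisten differs in detail):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 100; i != 0; i--)); do
          kill -0 "$pid" 2>/dev/null || return 1     # app died during startup
          scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1                                       # timed out
  }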
00:23:33.765 [2024-11-19 10:51:12.826692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048850 ] 00:23:33.765 [2024-11-19 10:51:12.913566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.765 [2024-11-19 10:51:12.948135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.713 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.713 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.713 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CEomI34Ee9 00:23:34.713 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:34.973 [2024-11-19 10:51:13.930261] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.973 nvme0n1 00:23:34.974 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.974 Running I/O for 1 seconds... 00:23:36.358 4632.00 IOPS, 18.09 MiB/s 00:23:36.358 Latency(us) 00:23:36.358 [2024-11-19T09:51:15.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.358 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:36.358 Verification LBA range: start 0x0 length 0x2000 00:23:36.358 nvme0n1 : 1.01 4705.74 18.38 0.00 0.00 27042.02 4805.97 29709.65 00:23:36.358 [2024-11-19T09:51:15.553Z] =================================================================================================================== 00:23:36.358 [2024-11-19T09:51:15.553Z] Total : 4705.74 18.38 0.00 0.00 27042.02 4805.97 29709.65 00:23:36.358 { 00:23:36.358 "results": [ 00:23:36.358 { 00:23:36.358 "job": "nvme0n1", 00:23:36.358 "core_mask": "0x2", 00:23:36.358 "workload": "verify", 00:23:36.358 "status": "finished", 00:23:36.358 "verify_range": { 00:23:36.358 "start": 0, 00:23:36.358 "length": 8192 00:23:36.358 }, 00:23:36.358 "queue_depth": 128, 00:23:36.358 "io_size": 4096, 00:23:36.358 "runtime": 1.011744, 00:23:36.358 "iops": 4705.73583831483, 00:23:36.358 "mibps": 18.381780618417306, 00:23:36.358 "io_failed": 0, 00:23:36.358 "io_timeout": 0, 00:23:36.358 "avg_latency_us": 27042.02256668767, 00:23:36.358 "min_latency_us": 4805.973333333333, 00:23:36.358 "max_latency_us": 29709.653333333332 00:23:36.358 } 00:23:36.358 ], 00:23:36.358 "core_count": 1 00:23:36.358 } 00:23:36.358 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:36.358 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.358 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.358 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.358 10:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:36.358 "subsystems": [ 00:23:36.358 { 00:23:36.358 "subsystem": "keyring", 00:23:36.358 "config": [ 00:23:36.358 { 00:23:36.358 "method": "keyring_file_add_key", 00:23:36.358 "params": { 00:23:36.358 "name": "key0", 00:23:36.358 "path": "/tmp/tmp.CEomI34Ee9" 00:23:36.358 } 00:23:36.358 } 00:23:36.358 ] 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "subsystem": "iobuf", 00:23:36.358 "config": [ 00:23:36.358 { 00:23:36.358 "method": "iobuf_set_options", 00:23:36.358 "params": { 00:23:36.358 "small_pool_count": 8192, 00:23:36.358 "large_pool_count": 1024, 00:23:36.358 "small_bufsize": 8192, 00:23:36.358 "large_bufsize": 135168, 00:23:36.358 "enable_numa": false 00:23:36.358 } 00:23:36.358 } 00:23:36.358 ] 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "subsystem": "sock", 00:23:36.358 "config": [ 00:23:36.358 { 00:23:36.358 "method": "sock_set_default_impl", 00:23:36.358 "params": { 00:23:36.358 "impl_name": "posix" 00:23:36.358 } 00:23:36.358 }, 00:23:36.358 { 00:23:36.358 "method": "sock_impl_set_options", 00:23:36.358 "params": { 00:23:36.358 "impl_name": "ssl", 00:23:36.358 "recv_buf_size": 4096, 00:23:36.358 "send_buf_size": 4096, 00:23:36.358 "enable_recv_pipe": true, 00:23:36.358 "enable_quickack": false, 00:23:36.358 "enable_placement_id": 0, 00:23:36.358 "enable_zerocopy_send_server": true, 00:23:36.358 "enable_zerocopy_send_client": false, 00:23:36.358 "zerocopy_threshold": 0, 00:23:36.358 "tls_version": 0, 00:23:36.358 "enable_ktls": false 00:23:36.358 } 00:23:36.358 }, 00:23:36.359 { 00:23:36.359 "method": "sock_impl_set_options", 00:23:36.359 "params": { 00:23:36.359 "impl_name": "posix", 00:23:36.359 "recv_buf_size": 2097152, 00:23:36.359 "send_buf_size": 2097152, 00:23:36.359 "enable_recv_pipe": true, 00:23:36.359 "enable_quickack": false, 00:23:36.359 "enable_placement_id": 0, 00:23:36.359 "enable_zerocopy_send_server": true, 00:23:36.359 "enable_zerocopy_send_client": false, 00:23:36.359 "zerocopy_threshold": 0, 00:23:36.359 "tls_version": 0, 00:23:36.359 "enable_ktls": false 00:23:36.359 } 00:23:36.359 } 00:23:36.359 ] 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "subsystem": "vmd", 00:23:36.359 "config": [] 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "subsystem": "accel", 00:23:36.359 "config": [ 00:23:36.359 { 00:23:36.359 "method": "accel_set_options", 00:23:36.359 "params": { 00:23:36.359 "small_cache_size": 128, 00:23:36.359 "large_cache_size": 16, 00:23:36.359 "task_count": 2048, 00:23:36.359 "sequence_count": 2048, 00:23:36.359 "buf_count": 2048 00:23:36.359 } 00:23:36.359 } 00:23:36.359 ] 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "subsystem": "bdev", 00:23:36.359 "config": [ 00:23:36.359 { 00:23:36.359 "method": "bdev_set_options", 00:23:36.359 "params": { 00:23:36.359 "bdev_io_pool_size": 65535, 00:23:36.359 "bdev_io_cache_size": 256, 00:23:36.359 "bdev_auto_examine": true, 00:23:36.359 "iobuf_small_cache_size": 128, 00:23:36.359 "iobuf_large_cache_size": 16 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "bdev_raid_set_options", 00:23:36.359 "params": { 00:23:36.359 "process_window_size_kb": 1024, 00:23:36.359 "process_max_bandwidth_mb_sec": 0 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "bdev_iscsi_set_options", 00:23:36.359 "params": { 00:23:36.359 "timeout_sec": 30 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "bdev_nvme_set_options", 00:23:36.359 "params": { 00:23:36.359 "action_on_timeout": "none", 00:23:36.359 
"timeout_us": 0, 00:23:36.359 "timeout_admin_us": 0, 00:23:36.359 "keep_alive_timeout_ms": 10000, 00:23:36.359 "arbitration_burst": 0, 00:23:36.359 "low_priority_weight": 0, 00:23:36.359 "medium_priority_weight": 0, 00:23:36.359 "high_priority_weight": 0, 00:23:36.359 "nvme_adminq_poll_period_us": 10000, 00:23:36.359 "nvme_ioq_poll_period_us": 0, 00:23:36.359 "io_queue_requests": 0, 00:23:36.359 "delay_cmd_submit": true, 00:23:36.359 "transport_retry_count": 4, 00:23:36.359 "bdev_retry_count": 3, 00:23:36.359 "transport_ack_timeout": 0, 00:23:36.359 "ctrlr_loss_timeout_sec": 0, 00:23:36.359 "reconnect_delay_sec": 0, 00:23:36.359 "fast_io_fail_timeout_sec": 0, 00:23:36.359 "disable_auto_failback": false, 00:23:36.359 "generate_uuids": false, 00:23:36.359 "transport_tos": 0, 00:23:36.359 "nvme_error_stat": false, 00:23:36.359 "rdma_srq_size": 0, 00:23:36.359 "io_path_stat": false, 00:23:36.359 "allow_accel_sequence": false, 00:23:36.359 "rdma_max_cq_size": 0, 00:23:36.359 "rdma_cm_event_timeout_ms": 0, 00:23:36.359 "dhchap_digests": [ 00:23:36.359 "sha256", 00:23:36.359 "sha384", 00:23:36.359 "sha512" 00:23:36.359 ], 00:23:36.359 "dhchap_dhgroups": [ 00:23:36.359 "null", 00:23:36.359 "ffdhe2048", 00:23:36.359 "ffdhe3072", 00:23:36.359 "ffdhe4096", 00:23:36.359 "ffdhe6144", 00:23:36.359 "ffdhe8192" 00:23:36.359 ] 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "bdev_nvme_set_hotplug", 00:23:36.359 "params": { 00:23:36.359 "period_us": 100000, 00:23:36.359 "enable": false 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "bdev_malloc_create", 00:23:36.359 "params": { 00:23:36.359 "name": "malloc0", 00:23:36.359 "num_blocks": 8192, 00:23:36.359 "block_size": 4096, 00:23:36.359 "physical_block_size": 4096, 00:23:36.359 "uuid": "3a5d36cb-33dc-4bf5-bf3a-8ad9c7709cc7", 00:23:36.359 "optimal_io_boundary": 0, 00:23:36.359 "md_size": 0, 00:23:36.359 "dif_type": 0, 00:23:36.359 "dif_is_head_of_md": false, 00:23:36.359 "dif_pi_format": 0 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "bdev_wait_for_examine" 00:23:36.359 } 00:23:36.359 ] 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "subsystem": "nbd", 00:23:36.359 "config": [] 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "subsystem": "scheduler", 00:23:36.359 "config": [ 00:23:36.359 { 00:23:36.359 "method": "framework_set_scheduler", 00:23:36.359 "params": { 00:23:36.359 "name": "static" 00:23:36.359 } 00:23:36.359 } 00:23:36.359 ] 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "subsystem": "nvmf", 00:23:36.359 "config": [ 00:23:36.359 { 00:23:36.359 "method": "nvmf_set_config", 00:23:36.359 "params": { 00:23:36.359 "discovery_filter": "match_any", 00:23:36.359 "admin_cmd_passthru": { 00:23:36.359 "identify_ctrlr": false 00:23:36.359 }, 00:23:36.359 "dhchap_digests": [ 00:23:36.359 "sha256", 00:23:36.359 "sha384", 00:23:36.359 "sha512" 00:23:36.359 ], 00:23:36.359 "dhchap_dhgroups": [ 00:23:36.359 "null", 00:23:36.359 "ffdhe2048", 00:23:36.359 "ffdhe3072", 00:23:36.359 "ffdhe4096", 00:23:36.359 "ffdhe6144", 00:23:36.359 "ffdhe8192" 00:23:36.359 ] 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_set_max_subsystems", 00:23:36.359 "params": { 00:23:36.359 "max_subsystems": 1024 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_set_crdt", 00:23:36.359 "params": { 00:23:36.359 "crdt1": 0, 00:23:36.359 "crdt2": 0, 00:23:36.359 "crdt3": 0 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_create_transport", 00:23:36.359 "params": 
{ 00:23:36.359 "trtype": "TCP", 00:23:36.359 "max_queue_depth": 128, 00:23:36.359 "max_io_qpairs_per_ctrlr": 127, 00:23:36.359 "in_capsule_data_size": 4096, 00:23:36.359 "max_io_size": 131072, 00:23:36.359 "io_unit_size": 131072, 00:23:36.359 "max_aq_depth": 128, 00:23:36.359 "num_shared_buffers": 511, 00:23:36.359 "buf_cache_size": 4294967295, 00:23:36.359 "dif_insert_or_strip": false, 00:23:36.359 "zcopy": false, 00:23:36.359 "c2h_success": false, 00:23:36.359 "sock_priority": 0, 00:23:36.359 "abort_timeout_sec": 1, 00:23:36.359 "ack_timeout": 0, 00:23:36.359 "data_wr_pool_size": 0 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_create_subsystem", 00:23:36.359 "params": { 00:23:36.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.359 "allow_any_host": false, 00:23:36.359 "serial_number": "00000000000000000000", 00:23:36.359 "model_number": "SPDK bdev Controller", 00:23:36.359 "max_namespaces": 32, 00:23:36.359 "min_cntlid": 1, 00:23:36.359 "max_cntlid": 65519, 00:23:36.359 "ana_reporting": false 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_subsystem_add_host", 00:23:36.359 "params": { 00:23:36.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.359 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.359 "psk": "key0" 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_subsystem_add_ns", 00:23:36.359 "params": { 00:23:36.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.359 "namespace": { 00:23:36.359 "nsid": 1, 00:23:36.359 "bdev_name": "malloc0", 00:23:36.359 "nguid": "3A5D36CB33DC4BF5BF3A8AD9C7709CC7", 00:23:36.359 "uuid": "3a5d36cb-33dc-4bf5-bf3a-8ad9c7709cc7", 00:23:36.359 "no_auto_visible": false 00:23:36.359 } 00:23:36.359 } 00:23:36.359 }, 00:23:36.359 { 00:23:36.359 "method": "nvmf_subsystem_add_listener", 00:23:36.359 "params": { 00:23:36.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.359 "listen_address": { 00:23:36.359 "trtype": "TCP", 00:23:36.359 "adrfam": "IPv4", 00:23:36.359 "traddr": "10.0.0.2", 00:23:36.359 "trsvcid": "4420" 00:23:36.359 }, 00:23:36.359 "secure_channel": false, 00:23:36.359 "sock_impl": "ssl" 00:23:36.359 } 00:23:36.359 } 00:23:36.359 ] 00:23:36.359 } 00:23:36.359 ] 00:23:36.359 }' 00:23:36.359 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:36.359 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:36.359 "subsystems": [ 00:23:36.359 { 00:23:36.359 "subsystem": "keyring", 00:23:36.359 "config": [ 00:23:36.359 { 00:23:36.359 "method": "keyring_file_add_key", 00:23:36.359 "params": { 00:23:36.359 "name": "key0", 00:23:36.359 "path": "/tmp/tmp.CEomI34Ee9" 00:23:36.359 } 00:23:36.359 } 00:23:36.359 ] 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "subsystem": "iobuf", 00:23:36.360 "config": [ 00:23:36.360 { 00:23:36.360 "method": "iobuf_set_options", 00:23:36.360 "params": { 00:23:36.360 "small_pool_count": 8192, 00:23:36.360 "large_pool_count": 1024, 00:23:36.360 "small_bufsize": 8192, 00:23:36.360 "large_bufsize": 135168, 00:23:36.360 "enable_numa": false 00:23:36.360 } 00:23:36.360 } 00:23:36.360 ] 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "subsystem": "sock", 00:23:36.360 "config": [ 00:23:36.360 { 00:23:36.360 "method": "sock_set_default_impl", 00:23:36.360 "params": { 00:23:36.360 "impl_name": "posix" 00:23:36.360 } 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "method": "sock_impl_set_options", 00:23:36.360 
"params": { 00:23:36.360 "impl_name": "ssl", 00:23:36.360 "recv_buf_size": 4096, 00:23:36.360 "send_buf_size": 4096, 00:23:36.360 "enable_recv_pipe": true, 00:23:36.360 "enable_quickack": false, 00:23:36.360 "enable_placement_id": 0, 00:23:36.360 "enable_zerocopy_send_server": true, 00:23:36.360 "enable_zerocopy_send_client": false, 00:23:36.360 "zerocopy_threshold": 0, 00:23:36.360 "tls_version": 0, 00:23:36.360 "enable_ktls": false 00:23:36.360 } 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "method": "sock_impl_set_options", 00:23:36.360 "params": { 00:23:36.360 "impl_name": "posix", 00:23:36.360 "recv_buf_size": 2097152, 00:23:36.360 "send_buf_size": 2097152, 00:23:36.360 "enable_recv_pipe": true, 00:23:36.360 "enable_quickack": false, 00:23:36.360 "enable_placement_id": 0, 00:23:36.360 "enable_zerocopy_send_server": true, 00:23:36.360 "enable_zerocopy_send_client": false, 00:23:36.360 "zerocopy_threshold": 0, 00:23:36.360 "tls_version": 0, 00:23:36.360 "enable_ktls": false 00:23:36.360 } 00:23:36.360 } 00:23:36.360 ] 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "subsystem": "vmd", 00:23:36.360 "config": [] 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "subsystem": "accel", 00:23:36.360 "config": [ 00:23:36.360 { 00:23:36.360 "method": "accel_set_options", 00:23:36.360 "params": { 00:23:36.360 "small_cache_size": 128, 00:23:36.360 "large_cache_size": 16, 00:23:36.360 "task_count": 2048, 00:23:36.360 "sequence_count": 2048, 00:23:36.360 "buf_count": 2048 00:23:36.360 } 00:23:36.360 } 00:23:36.360 ] 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "subsystem": "bdev", 00:23:36.360 "config": [ 00:23:36.360 { 00:23:36.360 "method": "bdev_set_options", 00:23:36.360 "params": { 00:23:36.360 "bdev_io_pool_size": 65535, 00:23:36.360 "bdev_io_cache_size": 256, 00:23:36.360 "bdev_auto_examine": true, 00:23:36.360 "iobuf_small_cache_size": 128, 00:23:36.360 "iobuf_large_cache_size": 16 00:23:36.360 } 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "method": "bdev_raid_set_options", 00:23:36.360 "params": { 00:23:36.360 "process_window_size_kb": 1024, 00:23:36.360 "process_max_bandwidth_mb_sec": 0 00:23:36.360 } 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "method": "bdev_iscsi_set_options", 00:23:36.360 "params": { 00:23:36.360 "timeout_sec": 30 00:23:36.360 } 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "method": "bdev_nvme_set_options", 00:23:36.360 "params": { 00:23:36.360 "action_on_timeout": "none", 00:23:36.360 "timeout_us": 0, 00:23:36.360 "timeout_admin_us": 0, 00:23:36.360 "keep_alive_timeout_ms": 10000, 00:23:36.360 "arbitration_burst": 0, 00:23:36.360 "low_priority_weight": 0, 00:23:36.360 "medium_priority_weight": 0, 00:23:36.360 "high_priority_weight": 0, 00:23:36.360 "nvme_adminq_poll_period_us": 10000, 00:23:36.360 "nvme_ioq_poll_period_us": 0, 00:23:36.360 "io_queue_requests": 512, 00:23:36.360 "delay_cmd_submit": true, 00:23:36.360 "transport_retry_count": 4, 00:23:36.360 "bdev_retry_count": 3, 00:23:36.360 "transport_ack_timeout": 0, 00:23:36.360 "ctrlr_loss_timeout_sec": 0, 00:23:36.360 "reconnect_delay_sec": 0, 00:23:36.360 "fast_io_fail_timeout_sec": 0, 00:23:36.360 "disable_auto_failback": false, 00:23:36.360 "generate_uuids": false, 00:23:36.360 "transport_tos": 0, 00:23:36.360 "nvme_error_stat": false, 00:23:36.360 "rdma_srq_size": 0, 00:23:36.360 "io_path_stat": false, 00:23:36.360 "allow_accel_sequence": false, 00:23:36.360 "rdma_max_cq_size": 0, 00:23:36.360 "rdma_cm_event_timeout_ms": 0, 00:23:36.360 "dhchap_digests": [ 00:23:36.360 "sha256", 00:23:36.360 "sha384", 00:23:36.360 
"sha512" 00:23:36.360 ], 00:23:36.360 "dhchap_dhgroups": [ 00:23:36.360 "null", 00:23:36.360 "ffdhe2048", 00:23:36.360 "ffdhe3072", 00:23:36.360 "ffdhe4096", 00:23:36.360 "ffdhe6144", 00:23:36.360 "ffdhe8192" 00:23:36.360 ] 00:23:36.360 } 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "method": "bdev_nvme_attach_controller", 00:23:36.360 "params": { 00:23:36.360 "name": "nvme0", 00:23:36.360 "trtype": "TCP", 00:23:36.360 "adrfam": "IPv4", 00:23:36.360 "traddr": "10.0.0.2", 00:23:36.360 "trsvcid": "4420", 00:23:36.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.360 "prchk_reftag": false, 00:23:36.360 "prchk_guard": false, 00:23:36.360 "ctrlr_loss_timeout_sec": 0, 00:23:36.360 "reconnect_delay_sec": 0, 00:23:36.360 "fast_io_fail_timeout_sec": 0, 00:23:36.360 "psk": "key0", 00:23:36.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.360 "hdgst": false, 00:23:36.360 "ddgst": false, 00:23:36.360 "multipath": "multipath" 00:23:36.360 } 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "method": "bdev_nvme_set_hotplug", 00:23:36.360 "params": { 00:23:36.360 "period_us": 100000, 00:23:36.360 "enable": false 00:23:36.360 } 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "method": "bdev_enable_histogram", 00:23:36.360 "params": { 00:23:36.360 "name": "nvme0n1", 00:23:36.360 "enable": true 00:23:36.360 } 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "method": "bdev_wait_for_examine" 00:23:36.360 } 00:23:36.360 ] 00:23:36.360 }, 00:23:36.360 { 00:23:36.360 "subsystem": "nbd", 00:23:36.360 "config": [] 00:23:36.360 } 00:23:36.360 ] 00:23:36.360 }' 00:23:36.360 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1048850 00:23:36.360 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1048850 ']' 00:23:36.360 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1048850 00:23:36.360 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.360 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.360 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1048850 00:23:36.621 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:36.621 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:36.621 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1048850' 00:23:36.621 killing process with pid 1048850 00:23:36.621 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1048850 00:23:36.621 Received shutdown signal, test time was about 1.000000 seconds 00:23:36.621 00:23:36.621 Latency(us) 00:23:36.621 [2024-11-19T09:51:15.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.621 [2024-11-19T09:51:15.816Z] =================================================================================================================== 00:23:36.621 [2024-11-19T09:51:15.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1048850 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1048677 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1048677 
']' 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1048677 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1048677 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1048677' 00:23:36.622 killing process with pid 1048677 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1048677 00:23:36.622 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1048677 00:23:36.882 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:36.882 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.882 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.882 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:36.882 "subsystems": [ 00:23:36.882 { 00:23:36.882 "subsystem": "keyring", 00:23:36.882 "config": [ 00:23:36.882 { 00:23:36.882 "method": "keyring_file_add_key", 00:23:36.883 "params": { 00:23:36.883 "name": "key0", 00:23:36.883 "path": "/tmp/tmp.CEomI34Ee9" 00:23:36.883 } 00:23:36.883 } 00:23:36.883 ] 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "subsystem": "iobuf", 00:23:36.883 "config": [ 00:23:36.883 { 00:23:36.883 "method": "iobuf_set_options", 00:23:36.883 "params": { 00:23:36.883 "small_pool_count": 8192, 00:23:36.883 "large_pool_count": 1024, 00:23:36.883 "small_bufsize": 8192, 00:23:36.883 "large_bufsize": 135168, 00:23:36.883 "enable_numa": false 00:23:36.883 } 00:23:36.883 } 00:23:36.883 ] 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "subsystem": "sock", 00:23:36.883 "config": [ 00:23:36.883 { 00:23:36.883 "method": "sock_set_default_impl", 00:23:36.883 "params": { 00:23:36.883 "impl_name": "posix" 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "sock_impl_set_options", 00:23:36.883 "params": { 00:23:36.883 "impl_name": "ssl", 00:23:36.883 "recv_buf_size": 4096, 00:23:36.883 "send_buf_size": 4096, 00:23:36.883 "enable_recv_pipe": true, 00:23:36.883 "enable_quickack": false, 00:23:36.883 "enable_placement_id": 0, 00:23:36.883 "enable_zerocopy_send_server": true, 00:23:36.883 "enable_zerocopy_send_client": false, 00:23:36.883 "zerocopy_threshold": 0, 00:23:36.883 "tls_version": 0, 00:23:36.883 "enable_ktls": false 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "sock_impl_set_options", 00:23:36.883 "params": { 00:23:36.883 "impl_name": "posix", 00:23:36.883 "recv_buf_size": 2097152, 00:23:36.883 "send_buf_size": 2097152, 00:23:36.883 "enable_recv_pipe": true, 00:23:36.883 "enable_quickack": false, 00:23:36.883 "enable_placement_id": 0, 00:23:36.883 "enable_zerocopy_send_server": true, 00:23:36.883 "enable_zerocopy_send_client": false, 00:23:36.883 "zerocopy_threshold": 0, 00:23:36.883 "tls_version": 0, 00:23:36.883 "enable_ktls": 
false 00:23:36.883 } 00:23:36.883 } 00:23:36.883 ] 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "subsystem": "vmd", 00:23:36.883 "config": [] 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "subsystem": "accel", 00:23:36.883 "config": [ 00:23:36.883 { 00:23:36.883 "method": "accel_set_options", 00:23:36.883 "params": { 00:23:36.883 "small_cache_size": 128, 00:23:36.883 "large_cache_size": 16, 00:23:36.883 "task_count": 2048, 00:23:36.883 "sequence_count": 2048, 00:23:36.883 "buf_count": 2048 00:23:36.883 } 00:23:36.883 } 00:23:36.883 ] 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "subsystem": "bdev", 00:23:36.883 "config": [ 00:23:36.883 { 00:23:36.883 "method": "bdev_set_options", 00:23:36.883 "params": { 00:23:36.883 "bdev_io_pool_size": 65535, 00:23:36.883 "bdev_io_cache_size": 256, 00:23:36.883 "bdev_auto_examine": true, 00:23:36.883 "iobuf_small_cache_size": 128, 00:23:36.883 "iobuf_large_cache_size": 16 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "bdev_raid_set_options", 00:23:36.883 "params": { 00:23:36.883 "process_window_size_kb": 1024, 00:23:36.883 "process_max_bandwidth_mb_sec": 0 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "bdev_iscsi_set_options", 00:23:36.883 "params": { 00:23:36.883 "timeout_sec": 30 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "bdev_nvme_set_options", 00:23:36.883 "params": { 00:23:36.883 "action_on_timeout": "none", 00:23:36.883 "timeout_us": 0, 00:23:36.883 "timeout_admin_us": 0, 00:23:36.883 "keep_alive_timeout_ms": 10000, 00:23:36.883 "arbitration_burst": 0, 00:23:36.883 "low_priority_weight": 0, 00:23:36.883 "medium_priority_weight": 0, 00:23:36.883 "high_priority_weight": 0, 00:23:36.883 "nvme_adminq_poll_period_us": 10000, 00:23:36.883 "nvme_ioq_poll_period_us": 0, 00:23:36.883 "io_queue_requests": 0, 00:23:36.883 "delay_cmd_submit": true, 00:23:36.883 "transport_retry_count": 4, 00:23:36.883 "bdev_retry_count": 3, 00:23:36.883 "transport_ack_timeout": 0, 00:23:36.883 "ctrlr_loss_timeout_sec": 0, 00:23:36.883 "reconnect_delay_sec": 0, 00:23:36.883 "fast_io_fail_timeout_sec": 0, 00:23:36.883 "disable_auto_failback": false, 00:23:36.883 "generate_uuids": false, 00:23:36.883 "transport_tos": 0, 00:23:36.883 "nvme_error_stat": false, 00:23:36.883 "rdma_srq_size": 0, 00:23:36.883 "io_path_stat": false, 00:23:36.883 "allow_accel_sequence": false, 00:23:36.883 "rdma_max_cq_size": 0, 00:23:36.883 "rdma_cm_event_timeout_ms": 0, 00:23:36.883 "dhchap_digests": [ 00:23:36.883 "sha256", 00:23:36.883 "sha384", 00:23:36.883 "sha512" 00:23:36.883 ], 00:23:36.883 "dhchap_dhgroups": [ 00:23:36.883 "null", 00:23:36.883 "ffdhe2048", 00:23:36.883 "ffdhe3072", 00:23:36.883 "ffdhe4096", 00:23:36.883 "ffdhe6144", 00:23:36.883 "ffdhe8192" 00:23:36.883 ] 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "bdev_nvme_set_hotplug", 00:23:36.883 "params": { 00:23:36.883 "period_us": 100000, 00:23:36.883 "enable": false 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "bdev_malloc_create", 00:23:36.883 "params": { 00:23:36.883 "name": "malloc0", 00:23:36.883 "num_blocks": 8192, 00:23:36.883 "block_size": 4096, 00:23:36.883 "physical_block_size": 4096, 00:23:36.883 "uuid": "3a5d36cb-33dc-4bf5-bf3a-8ad9c7709cc7", 00:23:36.883 "optimal_io_boundary": 0, 00:23:36.883 "md_size": 0, 00:23:36.883 "dif_type": 0, 00:23:36.883 "dif_is_head_of_md": false, 00:23:36.883 "dif_pi_format": 0 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "bdev_wait_for_examine" 
00:23:36.883 } 00:23:36.883 ] 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "subsystem": "nbd", 00:23:36.883 "config": [] 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "subsystem": "scheduler", 00:23:36.883 "config": [ 00:23:36.883 { 00:23:36.883 "method": "framework_set_scheduler", 00:23:36.883 "params": { 00:23:36.883 "name": "static" 00:23:36.883 } 00:23:36.883 } 00:23:36.883 ] 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "subsystem": "nvmf", 00:23:36.883 "config": [ 00:23:36.883 { 00:23:36.883 "method": "nvmf_set_config", 00:23:36.883 "params": { 00:23:36.883 "discovery_filter": "match_any", 00:23:36.883 "admin_cmd_passthru": { 00:23:36.883 "identify_ctrlr": false 00:23:36.883 }, 00:23:36.883 "dhchap_digests": [ 00:23:36.883 "sha256", 00:23:36.883 "sha384", 00:23:36.883 "sha512" 00:23:36.883 ], 00:23:36.883 "dhchap_dhgroups": [ 00:23:36.883 "null", 00:23:36.883 "ffdhe2048", 00:23:36.883 "ffdhe3072", 00:23:36.883 "ffdhe4096", 00:23:36.883 "ffdhe6144", 00:23:36.883 "ffdhe8192" 00:23:36.883 ] 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "nvmf_set_max_subsystems", 00:23:36.883 "params": { 00:23:36.883 "max_subsystems": 1024 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "nvmf_set_crdt", 00:23:36.883 "params": { 00:23:36.883 "crdt1": 0, 00:23:36.883 "crdt2": 0, 00:23:36.883 "crdt3": 0 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "nvmf_create_transport", 00:23:36.883 "params": { 00:23:36.883 "trtype": "TCP", 00:23:36.883 "max_queue_depth": 128, 00:23:36.883 "max_io_qpairs_per_ctrlr": 127, 00:23:36.883 "in_capsule_data_size": 4096, 00:23:36.883 "max_io_size": 131072, 00:23:36.883 "io_unit_size": 131072, 00:23:36.883 "max_aq_depth": 128, 00:23:36.883 "num_shared_buffers": 511, 00:23:36.883 "buf_cache_size": 4294967295, 00:23:36.883 "dif_insert_or_strip": false, 00:23:36.883 "zcopy": false, 00:23:36.883 "c2h_success": false, 00:23:36.883 "sock_priority": 0, 00:23:36.883 "abort_timeout_sec": 1, 00:23:36.883 "ack_timeout": 0, 00:23:36.883 "data_wr_pool_size": 0 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "nvmf_create_subsystem", 00:23:36.883 "params": { 00:23:36.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.883 "allow_any_host": false, 00:23:36.883 "serial_number": "00000000000000000000", 00:23:36.883 "model_number": "SPDK bdev Controller", 00:23:36.883 "max_namespaces": 32, 00:23:36.883 "min_cntlid": 1, 00:23:36.883 "max_cntlid": 65519, 00:23:36.883 "ana_reporting": false 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "nvmf_subsystem_add_host", 00:23:36.883 "params": { 00:23:36.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.883 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.883 "psk": "key0" 00:23:36.883 } 00:23:36.883 }, 00:23:36.883 { 00:23:36.883 "method": "nvmf_subsystem_add_ns", 00:23:36.883 "params": { 00:23:36.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.884 "namespace": { 00:23:36.884 "nsid": 1, 00:23:36.884 "bdev_name": "malloc0", 00:23:36.884 "nguid": "3A5D36CB33DC4BF5BF3A8AD9C7709CC7", 00:23:36.884 "uuid": "3a5d36cb-33dc-4bf5-bf3a-8ad9c7709cc7", 00:23:36.884 "no_auto_visible": false 00:23:36.884 } 00:23:36.884 } 00:23:36.884 }, 00:23:36.884 { 00:23:36.884 "method": "nvmf_subsystem_add_listener", 00:23:36.884 "params": { 00:23:36.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.884 "listen_address": { 00:23:36.884 "trtype": "TCP", 00:23:36.884 "adrfam": "IPv4", 00:23:36.884 "traddr": "10.0.0.2", 00:23:36.884 "trsvcid": "4420" 00:23:36.884 }, 00:23:36.884 
"secure_channel": false, 00:23:36.884 "sock_impl": "ssl" 00:23:36.884 } 00:23:36.884 } 00:23:36.884 ] 00:23:36.884 } 00:23:36.884 ] 00:23:36.884 }' 00:23:36.884 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.884 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1049394 00:23:36.884 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1049394 00:23:36.884 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:36.884 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1049394 ']' 00:23:36.884 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.884 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.884 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.884 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.884 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.884 [2024-11-19 10:51:15.926471] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:23:36.884 [2024-11-19 10:51:15.926548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.884 [2024-11-19 10:51:16.018398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.884 [2024-11-19 10:51:16.048100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.884 [2024-11-19 10:51:16.048130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.884 [2024-11-19 10:51:16.048135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.884 [2024-11-19 10:51:16.048140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.884 [2024-11-19 10:51:16.048144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.884 [2024-11-19 10:51:16.048635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.145 [2024-11-19 10:51:16.241779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.145 [2024-11-19 10:51:16.273814] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.145 [2024-11-19 10:51:16.273999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1049738 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1049738 /var/tmp/bdevperf.sock 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1049738 ']' 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
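The bdevperf invocation is identical across all four runs in this section; reading the flags off as they are used here (defer to bdevperf --help on the matching build for authority):

  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
  #  -m 2    core mask, run on core 1 only
  #  -z      start idle; I/O is kicked off later via bdevperf.py ... perform_tests
  #  -r      RPC listen address, so rpc.py and bdevperf.py can reach this instance
  #  -q 128  queue depth   -o 4k  IO size   -w verify  workload   -t 1  seconds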
00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.717 10:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:37.717 "subsystems": [ 00:23:37.717 { 00:23:37.717 "subsystem": "keyring", 00:23:37.717 "config": [ 00:23:37.717 { 00:23:37.717 "method": "keyring_file_add_key", 00:23:37.717 "params": { 00:23:37.717 "name": "key0", 00:23:37.717 "path": "/tmp/tmp.CEomI34Ee9" 00:23:37.717 } 00:23:37.717 } 00:23:37.717 ] 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "subsystem": "iobuf", 00:23:37.717 "config": [ 00:23:37.717 { 00:23:37.717 "method": "iobuf_set_options", 00:23:37.717 "params": { 00:23:37.717 "small_pool_count": 8192, 00:23:37.717 "large_pool_count": 1024, 00:23:37.717 "small_bufsize": 8192, 00:23:37.717 "large_bufsize": 135168, 00:23:37.717 "enable_numa": false 00:23:37.717 } 00:23:37.717 } 00:23:37.717 ] 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "subsystem": "sock", 00:23:37.717 "config": [ 00:23:37.717 { 00:23:37.717 "method": "sock_set_default_impl", 00:23:37.717 "params": { 00:23:37.717 "impl_name": "posix" 00:23:37.717 } 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "method": "sock_impl_set_options", 00:23:37.717 "params": { 00:23:37.717 "impl_name": "ssl", 00:23:37.717 "recv_buf_size": 4096, 00:23:37.717 "send_buf_size": 4096, 00:23:37.717 "enable_recv_pipe": true, 00:23:37.717 "enable_quickack": false, 00:23:37.717 "enable_placement_id": 0, 00:23:37.717 "enable_zerocopy_send_server": true, 00:23:37.717 "enable_zerocopy_send_client": false, 00:23:37.717 "zerocopy_threshold": 0, 00:23:37.717 "tls_version": 0, 00:23:37.717 "enable_ktls": false 00:23:37.717 } 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "method": "sock_impl_set_options", 00:23:37.717 "params": { 00:23:37.717 "impl_name": "posix", 00:23:37.717 "recv_buf_size": 2097152, 00:23:37.717 "send_buf_size": 2097152, 00:23:37.717 "enable_recv_pipe": true, 00:23:37.717 "enable_quickack": false, 00:23:37.717 "enable_placement_id": 0, 00:23:37.717 "enable_zerocopy_send_server": true, 00:23:37.717 "enable_zerocopy_send_client": false, 00:23:37.717 "zerocopy_threshold": 0, 00:23:37.717 "tls_version": 0, 00:23:37.717 "enable_ktls": false 00:23:37.717 } 00:23:37.717 } 00:23:37.717 ] 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "subsystem": "vmd", 00:23:37.717 "config": [] 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "subsystem": "accel", 00:23:37.717 "config": [ 00:23:37.717 { 00:23:37.717 "method": "accel_set_options", 00:23:37.717 "params": { 00:23:37.717 "small_cache_size": 128, 00:23:37.717 "large_cache_size": 16, 00:23:37.717 "task_count": 2048, 00:23:37.717 "sequence_count": 2048, 00:23:37.717 "buf_count": 2048 00:23:37.717 } 00:23:37.717 } 00:23:37.717 ] 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "subsystem": "bdev", 00:23:37.717 "config": [ 00:23:37.717 { 00:23:37.717 "method": "bdev_set_options", 00:23:37.717 "params": { 00:23:37.717 "bdev_io_pool_size": 65535, 00:23:37.717 "bdev_io_cache_size": 256, 00:23:37.717 "bdev_auto_examine": true, 00:23:37.717 "iobuf_small_cache_size": 128, 00:23:37.717 "iobuf_large_cache_size": 16 00:23:37.717 } 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "method": 
"bdev_raid_set_options", 00:23:37.717 "params": { 00:23:37.717 "process_window_size_kb": 1024, 00:23:37.717 "process_max_bandwidth_mb_sec": 0 00:23:37.717 } 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "method": "bdev_iscsi_set_options", 00:23:37.717 "params": { 00:23:37.717 "timeout_sec": 30 00:23:37.717 } 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "method": "bdev_nvme_set_options", 00:23:37.717 "params": { 00:23:37.717 "action_on_timeout": "none", 00:23:37.717 "timeout_us": 0, 00:23:37.717 "timeout_admin_us": 0, 00:23:37.717 "keep_alive_timeout_ms": 10000, 00:23:37.717 "arbitration_burst": 0, 00:23:37.717 "low_priority_weight": 0, 00:23:37.717 "medium_priority_weight": 0, 00:23:37.717 "high_priority_weight": 0, 00:23:37.717 "nvme_adminq_poll_period_us": 10000, 00:23:37.717 "nvme_ioq_poll_period_us": 0, 00:23:37.717 "io_queue_requests": 512, 00:23:37.717 "delay_cmd_submit": true, 00:23:37.717 "transport_retry_count": 4, 00:23:37.717 "bdev_retry_count": 3, 00:23:37.717 "transport_ack_timeout": 0, 00:23:37.717 "ctrlr_loss_timeout_sec": 0, 00:23:37.717 "reconnect_delay_sec": 0, 00:23:37.717 "fast_io_fail_timeout_sec": 0, 00:23:37.717 "disable_auto_failback": false, 00:23:37.717 "generate_uuids": false, 00:23:37.717 "transport_tos": 0, 00:23:37.717 "nvme_error_stat": false, 00:23:37.717 "rdma_srq_size": 0, 00:23:37.717 "io_path_stat": false, 00:23:37.717 "allow_accel_sequence": false, 00:23:37.717 "rdma_max_cq_size": 0, 00:23:37.717 "rdma_cm_event_timeout_ms": 0, 00:23:37.717 "dhchap_digests": [ 00:23:37.717 "sha256", 00:23:37.717 "sha384", 00:23:37.717 "sha512" 00:23:37.717 ], 00:23:37.717 "dhchap_dhgroups": [ 00:23:37.717 "null", 00:23:37.717 "ffdhe2048", 00:23:37.717 "ffdhe3072", 00:23:37.717 "ffdhe4096", 00:23:37.717 "ffdhe6144", 00:23:37.717 "ffdhe8192" 00:23:37.717 ] 00:23:37.717 } 00:23:37.717 }, 00:23:37.717 { 00:23:37.717 "method": "bdev_nvme_attach_controller", 00:23:37.717 "params": { 00:23:37.718 "name": "nvme0", 00:23:37.718 "trtype": "TCP", 00:23:37.718 "adrfam": "IPv4", 00:23:37.718 "traddr": "10.0.0.2", 00:23:37.718 "trsvcid": "4420", 00:23:37.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.718 "prchk_reftag": false, 00:23:37.718 "prchk_guard": false, 00:23:37.718 "ctrlr_loss_timeout_sec": 0, 00:23:37.718 "reconnect_delay_sec": 0, 00:23:37.718 "fast_io_fail_timeout_sec": 0, 00:23:37.718 "psk": "key0", 00:23:37.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.718 "hdgst": false, 00:23:37.718 "ddgst": false, 00:23:37.718 "multipath": "multipath" 00:23:37.718 } 00:23:37.718 }, 00:23:37.718 { 00:23:37.718 "method": "bdev_nvme_set_hotplug", 00:23:37.718 "params": { 00:23:37.718 "period_us": 100000, 00:23:37.718 "enable": false 00:23:37.718 } 00:23:37.718 }, 00:23:37.718 { 00:23:37.718 "method": "bdev_enable_histogram", 00:23:37.718 "params": { 00:23:37.718 "name": "nvme0n1", 00:23:37.718 "enable": true 00:23:37.718 } 00:23:37.718 }, 00:23:37.718 { 00:23:37.718 "method": "bdev_wait_for_examine" 00:23:37.718 } 00:23:37.718 ] 00:23:37.718 }, 00:23:37.718 { 00:23:37.718 "subsystem": "nbd", 00:23:37.718 "config": [] 00:23:37.718 } 00:23:37.718 ] 00:23:37.718 }' 00:23:37.718 [2024-11-19 10:51:16.792272] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:23:37.718 [2024-11-19 10:51:16.792325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049738 ] 00:23:37.718 [2024-11-19 10:51:16.878106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.718 [2024-11-19 10:51:16.907940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.979 [2024-11-19 10:51:17.042811] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.550 10:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.550 10:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.550 10:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.550 10:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:38.811 10:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.811 10:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.811 Running I/O for 1 seconds... 00:23:39.752 5273.00 IOPS, 20.60 MiB/s 00:23:39.752 Latency(us) 00:23:39.752 [2024-11-19T09:51:18.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.752 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:39.752 Verification LBA range: start 0x0 length 0x2000 00:23:39.752 nvme0n1 : 1.01 5340.82 20.86 0.00 0.00 23821.89 4696.75 24903.68 00:23:39.752 [2024-11-19T09:51:18.947Z] =================================================================================================================== 00:23:39.752 [2024-11-19T09:51:18.947Z] Total : 5340.82 20.86 0.00 0.00 23821.89 4696.75 24903.68 00:23:39.752 { 00:23:39.752 "results": [ 00:23:39.752 { 00:23:39.752 "job": "nvme0n1", 00:23:39.752 "core_mask": "0x2", 00:23:39.752 "workload": "verify", 00:23:39.752 "status": "finished", 00:23:39.752 "verify_range": { 00:23:39.752 "start": 0, 00:23:39.752 "length": 8192 00:23:39.752 }, 00:23:39.752 "queue_depth": 128, 00:23:39.752 "io_size": 4096, 00:23:39.752 "runtime": 1.011267, 00:23:39.752 "iops": 5340.824925563674, 00:23:39.752 "mibps": 20.8625973654831, 00:23:39.752 "io_failed": 0, 00:23:39.752 "io_timeout": 0, 00:23:39.752 "avg_latency_us": 23821.894118373137, 00:23:39.752 "min_latency_us": 4696.746666666667, 00:23:39.752 "max_latency_us": 24903.68 00:23:39.752 } 00:23:39.752 ], 00:23:39.752 "core_count": 1 00:23:39.752 } 00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 
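[editor's note] The results object printed a few lines up is plain JSON, so the headline numbers can be checked mechanically rather than eyeballed; the test itself only verifies the controller name with jq, as shown above. A hedged sketch of both steps, assuming the perform_tests JSON object was captured to a hypothetical results.json:

# same check the log performs: the attach from the JSON config must have stuck
name=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1

# pull the headline numbers out of the results object (field names as printed above)
jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, avg \(.avg_latency_us|floor) us"' results.json

Against the object shown above this would print "nvme0n1: 5340 IOPS, avg 23821 us", matching the human-readable latency table.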
00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:39.752 10:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:39.752 nvmf_trace.0 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1049738 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1049738 ']' 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1049738 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1049738 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1049738' 00:23:40.013 killing process with pid 1049738 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1049738 00:23:40.013 Received shutdown signal, test time was about 1.000000 seconds 00:23:40.013 00:23:40.013 Latency(us) 00:23:40.013 [2024-11-19T09:51:19.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.013 [2024-11-19T09:51:19.208Z] =================================================================================================================== 00:23:40.013 [2024-11-19T09:51:19.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1049738 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.013 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.013 rmmod nvme_tcp 00:23:40.013 rmmod nvme_fabrics 00:23:40.013 rmmod nvme_keyring 00:23:40.273 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.273 10:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:40.273 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:40.273 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1049394 ']' 00:23:40.273 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1049394 00:23:40.273 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1049394 ']' 00:23:40.273 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1049394 00:23:40.273 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.273 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.273 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1049394 00:23:40.273 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1049394' 00:23:40.274 killing process with pid 1049394 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1049394 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1049394 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.274 10:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.6SYUp7ulQG /tmp/tmp.J6rghn6oyf /tmp/tmp.CEomI34Ee9 00:23:42.821 00:23:42.821 real 1m28.312s 00:23:42.821 user 2m19.492s 00:23:42.821 sys 0m27.550s 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.821 ************************************ 00:23:42.821 END TEST nvmf_tls 
00:23:42.821 ************************************ 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:42.821 ************************************ 00:23:42.821 START TEST nvmf_fips 00:23:42.821 ************************************ 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:42.821 * Looking for test storage... 00:23:42.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.821 --rc genhtml_branch_coverage=1 00:23:42.821 --rc genhtml_function_coverage=1 00:23:42.821 --rc genhtml_legend=1 00:23:42.821 --rc geninfo_all_blocks=1 00:23:42.821 --rc geninfo_unexecuted_blocks=1 00:23:42.821 00:23:42.821 ' 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.821 --rc genhtml_branch_coverage=1 00:23:42.821 --rc genhtml_function_coverage=1 00:23:42.821 --rc genhtml_legend=1 00:23:42.821 --rc geninfo_all_blocks=1 00:23:42.821 --rc geninfo_unexecuted_blocks=1 00:23:42.821 00:23:42.821 ' 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.821 --rc genhtml_branch_coverage=1 00:23:42.821 --rc genhtml_function_coverage=1 00:23:42.821 --rc genhtml_legend=1 00:23:42.821 --rc geninfo_all_blocks=1 00:23:42.821 --rc geninfo_unexecuted_blocks=1 00:23:42.821 00:23:42.821 ' 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.821 --rc genhtml_branch_coverage=1 00:23:42.821 --rc genhtml_function_coverage=1 00:23:42.821 --rc genhtml_legend=1 00:23:42.821 --rc geninfo_all_blocks=1 00:23:42.821 --rc geninfo_unexecuted_blocks=1 00:23:42.821 00:23:42.821 ' 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.821 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:42.822 10:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:42.822 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:42.823 Error setting digest 00:23:42.823 4042E5C24B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:42.823 4042E5C24B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.823 
10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.823 10:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.964 10:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:50.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:50.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.964 10:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:50.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:50.964 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.964 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.965 10:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:50.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:23:50.965 00:23:50.965 --- 10.0.0.2 ping statistics --- 00:23:50.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.965 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:23:50.965 00:23:50.965 --- 10.0.0.1 ping statistics --- 00:23:50.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.965 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1054447 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1054447 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1054447 ']' 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.965 10:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.965 [2024-11-19 10:51:29.584088] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
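[editor's note] The plumbing a few lines up builds the two-port loopback topology these phy tests run on: one port of a back-to-back-wired pair is moved into a private network namespace so target and initiator can share a single host, and nvmf_tgt (starting here via NVMF_TARGET_NS_CMD) runs entirely inside that namespace. A condensed replay, commands and names as they appear in the log:

ip netns add cvl_0_0_ns_spdk                         # private stack for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The two pings above are exactly the sanity checks whose output appears in the log; only after both succeed does the suite consider the fabric ready and launch the target.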
00:23:50.965 [2024-11-19 10:51:29.584171] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.965 [2024-11-19 10:51:29.684243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.965 [2024-11-19 10:51:29.733627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.965 [2024-11-19 10:51:29.733678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.965 [2024-11-19 10:51:29.733687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.965 [2024-11-19 10:51:29.733700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.965 [2024-11-19 10:51:29.733706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.965 [2024-11-19 10:51:29.734504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.xPS 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:51.226 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.xPS 00:23:51.487 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.xPS 00:23:51.487 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.xPS 00:23:51.487 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:51.487 [2024-11-19 10:51:30.588768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.487 [2024-11-19 10:51:30.604761] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.487 [2024-11-19 10:51:30.605046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.487 malloc0 00:23:51.487 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.487 10:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1054656 00:23:51.487 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1054656 /var/tmp/bdevperf.sock 00:23:51.487 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:51.487 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1054656 ']' 00:23:51.487 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.487 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.487 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.488 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.488 10:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.748 [2024-11-19 10:51:30.749209] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:23:51.748 [2024-11-19 10:51:30.749287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054656 ] 00:23:51.748 [2024-11-19 10:51:30.830593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.748 [2024-11-19 10:51:30.881801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.690 10:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.690 10:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:52.690 10:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.xPS 00:23:52.690 10:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:52.951 [2024-11-19 10:51:31.917120] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.951 TLSTESTn1 00:23:52.951 10:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.951 Running I/O for 10 seconds... 
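[editor's note] Everything the ten-second run whose samples follow depends on was staged just above: an NVMe TLS interchange-format PSK written to a 0600 file, registered in bdevperf's keyring as key0, then referenced by name at attach time. A condensed replay (key, paths, NQNs, and flags copied verbatim from the log; rpc.py and bdevperf.py shortened from their full workspace paths):

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)        # the log drew /tmp/spdk-psk.xPS
echo -n "$key" > "$key_path"              # -n: no trailing newline in the key file
chmod 0600 "$key_path"                    # keep the PSK private to the test user

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -b TLSTEST controller name is why the workload bdev in the results below is called TLSTESTn1: the attach exposes namespace 1 of that controller as a bdev.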
00:23:55.277 3658.00 IOPS, 14.29 MiB/s [2024-11-19T09:51:35.414Z] 4302.50 IOPS, 16.81 MiB/s [2024-11-19T09:51:36.357Z] 4626.33 IOPS, 18.07 MiB/s [2024-11-19T09:51:37.298Z] 5014.75 IOPS, 19.59 MiB/s [2024-11-19T09:51:38.240Z] 5135.20 IOPS, 20.06 MiB/s [2024-11-19T09:51:39.182Z] 5209.67 IOPS, 20.35 MiB/s [2024-11-19T09:51:40.564Z] 5140.29 IOPS, 20.08 MiB/s [2024-11-19T09:51:41.506Z] 5276.50 IOPS, 20.61 MiB/s [2024-11-19T09:51:42.447Z] 5258.78 IOPS, 20.54 MiB/s [2024-11-19T09:51:42.447Z] 5304.60 IOPS, 20.72 MiB/s
00:24:03.252 Latency(us)
00:24:03.252 [2024-11-19T09:51:42.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:03.252 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:03.252 Verification LBA range: start 0x0 length 0x2000
00:24:03.252 TLSTESTn1 : 10.05 5291.02 20.67 0.00 0.00 24119.71 6225.92 48496.64
00:24:03.252 [2024-11-19T09:51:42.447Z] ===================================================================================================================
00:24:03.252 [2024-11-19T09:51:42.447Z] Total : 5291.02 20.67 0.00 0.00 24119.71 6225.92 48496.64
00:24:03.252 {
00:24:03.252 "results": [
00:24:03.252 {
00:24:03.252 "job": "TLSTESTn1",
00:24:03.252 "core_mask": "0x4",
00:24:03.252 "workload": "verify",
00:24:03.252 "status": "finished",
00:24:03.252 "verify_range": {
00:24:03.252 "start": 0,
00:24:03.252 "length": 8192
00:24:03.252 },
00:24:03.252 "queue_depth": 128,
00:24:03.252 "io_size": 4096,
00:24:03.252 "runtime": 10.049476,
00:24:03.252 "iops": 5291.022138865748,
00:24:03.252 "mibps": 20.66805522994433,
00:24:03.252 "io_failed": 0,
00:24:03.252 "io_timeout": 0,
00:24:03.252 "avg_latency_us": 24119.705661626424,
00:24:03.252 "min_latency_us": 6225.92,
00:24:03.252 "max_latency_us": 48496.64
00:24:03.252 }
00:24:03.252 ],
00:24:03.252 "core_count": 1
00:24:03.252 }
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:24:03.252 nvmf_trace.0
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1054656
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1054656 ']'
00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- #
kill -0 1054656 00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1054656 00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1054656' 00:24:03.252 killing process with pid 1054656 00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1054656 00:24:03.252 Received shutdown signal, test time was about 10.000000 seconds 00:24:03.252 00:24:03.252 Latency(us) 00:24:03.252 [2024-11-19T09:51:42.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.252 [2024-11-19T09:51:42.447Z] =================================================================================================================== 00:24:03.252 [2024-11-19T09:51:42.447Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.252 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1054656 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.513 rmmod nvme_tcp 00:24:03.513 rmmod nvme_fabrics 00:24:03.513 rmmod nvme_keyring 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1054447 ']' 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1054447 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1054447 ']' 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1054447 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1054447 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1054447' 00:24:03.513 killing process with pid 1054447 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1054447 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1054447 00:24:03.513 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.774 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.688 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:05.688 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.xPS 00:24:05.688 00:24:05.688 real 0m23.226s 00:24:05.688 user 0m24.919s 00:24:05.688 sys 0m9.702s 00:24:05.688 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.688 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.688 ************************************ 00:24:05.688 END TEST nvmf_fips 00:24:05.688 ************************************ 00:24:05.688 10:51:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:05.688 10:51:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:05.688 10:51:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.688 10:51:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:05.949 ************************************ 00:24:05.949 START TEST nvmf_control_msg_list 00:24:05.949 ************************************ 00:24:05.949 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:05.949 * Looking for test storage... 
00:24:05.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:05.949 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:05.949 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:05.949 10:51:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:05.949 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:05.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.950 --rc genhtml_branch_coverage=1 00:24:05.950 --rc genhtml_function_coverage=1 00:24:05.950 --rc genhtml_legend=1 00:24:05.950 --rc geninfo_all_blocks=1 00:24:05.950 --rc geninfo_unexecuted_blocks=1 00:24:05.950 00:24:05.950 ' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:05.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.950 --rc genhtml_branch_coverage=1 00:24:05.950 --rc genhtml_function_coverage=1 00:24:05.950 --rc genhtml_legend=1 00:24:05.950 --rc geninfo_all_blocks=1 00:24:05.950 --rc geninfo_unexecuted_blocks=1 00:24:05.950 00:24:05.950 ' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:05.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.950 --rc genhtml_branch_coverage=1 00:24:05.950 --rc genhtml_function_coverage=1 00:24:05.950 --rc genhtml_legend=1 00:24:05.950 --rc geninfo_all_blocks=1 00:24:05.950 --rc geninfo_unexecuted_blocks=1 00:24:05.950 00:24:05.950 ' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:05.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.950 --rc genhtml_branch_coverage=1 00:24:05.950 --rc genhtml_function_coverage=1 00:24:05.950 --rc genhtml_legend=1 00:24:05.950 --rc geninfo_all_blocks=1 00:24:05.950 --rc geninfo_unexecuted_blocks=1 00:24:05.950 00:24:05.950 ' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:05.950 10:51:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:14.109 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.109 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:14.110 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.110 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:14.110 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:14.110 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:14.110 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.110 10:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:14.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:14.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms
00:24:14.110
00:24:14.110 --- 10.0.0.2 ping statistics ---
00:24:14.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:14.110 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:14.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:14.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms
00:24:14.110
00:24:14.110 --- 10.0.0.1 ping statistics ---
00:24:14.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:14.110 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:14.110 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1061157
00:24:14.111 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1061157
00:24:14.111 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:24:14.111 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1061157 ']'
00:24:14.111 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.111 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.111 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.111 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.111 10:51:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.111 [2024-11-19 10:51:52.695541] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:24:14.111 [2024-11-19 10:51:52.695613] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.111 [2024-11-19 10:51:52.796277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.111 [2024-11-19 10:51:52.847871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.111 [2024-11-19 10:51:52.847919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.111 [2024-11-19 10:51:52.847928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.111 [2024-11-19 10:51:52.847935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.111 [2024-11-19 10:51:52.847942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
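Note: the nvmf_tgt instance starting up above runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit assembled a few lines earlier. A minimal sketch of that topology, assuming two ports of the same physical NIC (cvl_0_0, cvl_0_1) are cabled back-to-back as on this rig; every command below appears in the trace above:

    ip netns add cvl_0_0_ns_spdk                  # namespace that owns the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
    ping -c 1 10.0.0.2                            # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Running the target behind the namespace keeps its 10.0.0.2 listener isolated from the host stack, so each test sees a clean point-to-point link.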
00:24:14.111 [2024-11-19 10:51:52.848714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.372 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.372 [2024-11-19 10:51:53.566601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.632 Malloc0 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.632 10:51:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.632 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.633 [2024-11-19 10:51:53.620998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.633 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.633 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1061306 00:24:14.633 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.633 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1061308 00:24:14.633 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.633 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1061310 00:24:14.633 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1061306 00:24:14.633 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.633 [2024-11-19 10:51:53.721989] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:14.633 [2024-11-19 10:51:53.722517] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:14.633 [2024-11-19 10:51:53.722868] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:15.576 Initializing NVMe Controllers 00:24:15.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:15.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:15.576 Initialization complete. Launching workers. 
00:24:15.576 ========================================================
00:24:15.576 Latency(us)
00:24:15.576 Device Information : IOPS MiB/s Average min max
00:24:15.576 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1458.00 5.70 686.00 291.09 977.23
00:24:15.576 ========================================================
00:24:15.576 Total : 1458.00 5.70 686.00 291.09 977.23
00:24:15.576
00:24:15.837 Initializing NVMe Controllers
00:24:15.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:15.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:24:15.837 Initialization complete. Launching workers.
00:24:15.837 ========================================================
00:24:15.837 Latency(us)
00:24:15.837 Device Information : IOPS MiB/s Average min max
00:24:15.837 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1479.00 5.78 676.04 283.15 885.81
00:24:15.837 ========================================================
00:24:15.837 Total : 1479.00 5.78 676.04 283.15 885.81
00:24:15.837
00:24:15.837 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1061308
00:24:15.837 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1061310
00:24:15.837 Initializing NVMe Controllers
00:24:15.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:15.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:24:15.837 Initialization complete. Launching workers.
00:24:15.837 ========================================================
00:24:15.838 Latency(us)
00:24:15.838 Device Information : IOPS MiB/s Average min max
00:24:15.838 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1426.00 5.57 701.06 295.75 917.75
00:24:15.838 ========================================================
00:24:15.838 Total : 1426.00 5.57 701.06 295.75 917.75
00:24:15.838
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:15.838 rmmod nvme_tcp
00:24:15.838 rmmod nvme_fabrics
00:24:15.838 rmmod nvme_keyring
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n
1061157 ']' 00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1061157 00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1061157 ']' 00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1061157 00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.838 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1061157 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1061157' 00:24:16.099 killing process with pid 1061157 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1061157 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1061157 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.099 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.646 00:24:18.646 real 0m12.401s 00:24:18.646 user 0m7.808s 00:24:18.646 sys 0m6.636s 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:18.646 ************************************ 00:24:18.646 END TEST nvmf_control_msg_list 00:24:18.646 ************************************ 
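Note: the three latency tables from the control-msg-list run above are internally consistent with how the test was configured. The transport was created with --control-msg-num 1, presumably to make the three concurrent initiators contend for a single control-message buffer, and each spdk_nvme_perf instance ran at queue depth 1 for 1 second, so per core IOPS multiplied by average latency should come out near one second of busy time. A quick sketch of that arithmetic, using the values copied from this run:

    # q=1: each I/O completes before the next is issued, so
    # IOPS * avg latency (us) ~= 1e6 us of wall time per core.
    awk 'BEGIN {
        printf "lcore 1: %.4f s\n", 1479 * 676.04 / 1e6
        printf "lcore 2: %.4f s\n", 1458 * 686.00 / 1e6
        printf "lcore 3: %.4f s\n", 1426 * 701.06 / 1e6
    }'

All three products land within about 0.03% of 1.0 s, which is what a fully latency-bound queue-depth-1 run should show.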
00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:18.646 ************************************ 00:24:18.646 START TEST nvmf_wait_for_buf 00:24:18.646 ************************************ 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:18.646 * Looking for test storage... 00:24:18.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.646 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:18.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.646 --rc genhtml_branch_coverage=1 00:24:18.647 --rc genhtml_function_coverage=1 00:24:18.647 --rc genhtml_legend=1 00:24:18.647 --rc geninfo_all_blocks=1 00:24:18.647 --rc geninfo_unexecuted_blocks=1 00:24:18.647 00:24:18.647 ' 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:18.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.647 --rc genhtml_branch_coverage=1 00:24:18.647 --rc genhtml_function_coverage=1 00:24:18.647 --rc genhtml_legend=1 00:24:18.647 --rc geninfo_all_blocks=1 00:24:18.647 --rc geninfo_unexecuted_blocks=1 00:24:18.647 00:24:18.647 ' 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:18.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.647 --rc genhtml_branch_coverage=1 00:24:18.647 --rc genhtml_function_coverage=1 00:24:18.647 --rc genhtml_legend=1 00:24:18.647 --rc geninfo_all_blocks=1 00:24:18.647 --rc geninfo_unexecuted_blocks=1 00:24:18.647 00:24:18.647 ' 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:18.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.647 --rc genhtml_branch_coverage=1 00:24:18.647 --rc genhtml_function_coverage=1 00:24:18.647 --rc genhtml_legend=1 00:24:18.647 --rc geninfo_all_blocks=1 00:24:18.647 --rc geninfo_unexecuted_blocks=1 00:24:18.647 00:24:18.647 ' 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.647 10:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.647 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.795 
10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.795 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:26.796 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:26.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:26.796 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:26.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.796 10:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.796 10:52:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:24:26.796 00:24:26.796 --- 10.0.0.2 ping statistics --- 00:24:26.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.796 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:24:26.796 00:24:26.796 --- 10.0.0.1 ping statistics --- 00:24:26.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.796 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.796 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1065843 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1065843 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1065843 ']' 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.797 [2024-11-19 10:52:05.160703] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
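
The pings above validate the loopback topology the harness built just before: port cvl_0_0 (10.0.0.2) lives inside namespace cvl_0_0_ns_spdk while its peer cvl_0_1 (10.0.0.1) stays in the default namespace, with an iptables ACCEPT rule for TCP/4420 between them. The target is then backgrounded inside that namespace and the harness blocks until its RPC socket answers; a condensed sketch of that handshake (loop bound and sleep interval are illustrative, and the real waitforlisten helper does more validation):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app accepts commands.
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Because --wait-for-rpc defers framework initialization, the test can shrink the small iobuf pool (the iobuf_set_options --small-pool-count 154 call below) before framework_start_init, which is what makes buffer-wait retries observable later in the iobuf_get_stats check.
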
00:24:26.797 [2024-11-19 10:52:05.160768] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.797 [2024-11-19 10:52:05.258169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.797 [2024-11-19 10:52:05.308205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.797 [2024-11-19 10:52:05.308255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.797 [2024-11-19 10:52:05.308264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.797 [2024-11-19 10:52:05.308271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.797 [2024-11-19 10:52:05.308278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.797 [2024-11-19 10:52:05.309063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.797 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.059 10:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.059 Malloc0 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.059 [2024-11-19 10:52:06.145269] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:27.059 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.060 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.060 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.060 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:27.060 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.060 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.060 [2024-11-19 10:52:06.181549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.060 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.060 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:27.322 [2024-11-19 10:52:06.289308] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:28.707 Initializing NVMe Controllers 00:24:28.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:28.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:28.707 Initialization complete. Launching workers. 00:24:28.707 ======================================================== 00:24:28.707 Latency(us) 00:24:28.707 Device Information : IOPS MiB/s Average min max 00:24:28.707 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32356.31 10002.40 63856.98 00:24:28.707 ======================================================== 00:24:28.707 Total : 129.00 16.12 32356.31 10002.40 63856.98 00:24:28.707 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.707 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.707 rmmod nvme_tcp 00:24:28.707 rmmod nvme_fabrics 00:24:28.707 rmmod nvme_keyring 00:24:28.708 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.968 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:28.968 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:28.968 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1065843 ']' 00:24:28.968 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1065843 00:24:28.968 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1065843 ']' 00:24:28.968 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1065843 00:24:28.968 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:28.968 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.969 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1065843 00:24:28.969 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.969 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.969 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1065843' 00:24:28.969 killing process with pid 1065843 00:24:28.969 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1065843 00:24:28.969 10:52:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1065843 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.969 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.515 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.515 00:24:31.515 real 0m12.841s 00:24:31.515 user 0m5.197s 00:24:31.515 sys 0m6.248s 00:24:31.515 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.515 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.515 ************************************ 00:24:31.515 END TEST nvmf_wait_for_buf 00:24:31.515 ************************************ 00:24:31.515 10:52:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:24:31.515 10:52:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:24:31.515 10:52:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:24:31.515 10:52:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:24:31.515 10:52:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.515 10:52:10 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:39.656 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:39.656 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:39.656 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:39.656 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:39.656 ************************************ 00:24:39.656 START TEST nvmf_perf_adq 00:24:39.656 ************************************ 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:39.656 * Looking for test storage... 00:24:39.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.656 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.657 10:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:39.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.657 --rc genhtml_branch_coverage=1 00:24:39.657 --rc genhtml_function_coverage=1 00:24:39.657 --rc genhtml_legend=1 00:24:39.657 --rc geninfo_all_blocks=1 00:24:39.657 --rc geninfo_unexecuted_blocks=1 00:24:39.657 00:24:39.657 ' 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:39.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.657 --rc genhtml_branch_coverage=1 00:24:39.657 --rc genhtml_function_coverage=1 00:24:39.657 --rc genhtml_legend=1 00:24:39.657 --rc geninfo_all_blocks=1 00:24:39.657 --rc geninfo_unexecuted_blocks=1 00:24:39.657 00:24:39.657 ' 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:39.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.657 --rc genhtml_branch_coverage=1 00:24:39.657 --rc genhtml_function_coverage=1 00:24:39.657 --rc genhtml_legend=1 00:24:39.657 --rc geninfo_all_blocks=1 00:24:39.657 --rc geninfo_unexecuted_blocks=1 00:24:39.657 00:24:39.657 ' 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:39.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.657 --rc genhtml_branch_coverage=1 00:24:39.657 --rc genhtml_function_coverage=1 00:24:39.657 --rc genhtml_legend=1 00:24:39.657 --rc geninfo_all_blocks=1 00:24:39.657 --rc geninfo_unexecuted_blocks=1 00:24:39.657 00:24:39.657 ' 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
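
The lt 1.15 2 probe that opens each test preamble (seen again here for perf_adq) decides whether the installed lcov is older than 2.x by comparing dotted version strings numerically, field by field. An approximate sketch of that comparison, using the helper names from the trace (the real scripts/common.sh also handles other operators and non-numeric fields):

    cmp_versions() {                     # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # Walk the longer of the two field lists, padding the shorter with 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # ver1 < ver2
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                         # equal versions are not "less than"
    }
    lt() { cmp_versions "$1" '<' "$2"; }
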
00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:39.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:39.657 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:39.657 10:52:17 
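
The "[: : integer expression expected" diagnostic captured above (nvmf/common.sh@33, where the trace shows '[' '' -eq 1 ']') is not fatal: test(1) cannot parse an empty string as an integer, so it prints that message to stderr and returns status 2, and the if-branch simply falls through. A small reproduction plus the usual defensive spellings; the variable name is illustrative:

#!/usr/bin/env bash
flag=""   # e.g. an unset feature flag, as in the trace

# Reproduces the diagnostic: "" is not an integer, test prints the error
# and returns 2, so the branch is skipped rather than aborting the script.
if [ "$flag" -eq 1 ]; then echo "enabled"; fi

# Defensive forms that avoid the stderr noise:
if [ "${flag:-0}" -eq 1 ]; then echo "enabled"; fi   # default empty to 0
if [[ $flag == 1 ]]; then echo "enabled"; fi         # string compare instead
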
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:46.494 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.494 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:46.494 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:46.495 10:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:46.495 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:46.495 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:46.495 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:46.495 10:52:24 
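
The discovery loop above keys NICs off PCI vendor:device pairs (the e810 array holds Intel 0x1592/0x159b, matching the two 0000:4b:00.x functions found) and then maps each function to its kernel netdev through sysfs. A condensed sketch of that flow using lspci in place of the harness-internal pci_bus_cache; only the E810 IDs shown in the trace are listed:

#!/usr/bin/env bash
# Collect Intel E810 functions and resolve each PCI address to its net
# device via /sys/bus/pci/devices/$pci/net, as nvmf/common.sh does above.
declare -a pci_devs net_devs

while read -r addr _; do
    pci_devs+=("$addr")
done < <(lspci -D -d 8086:1592; lspci -D -d 8086:159b)

for pci in "${pci_devs[@]}"; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] || continue        # function may be unbound from a driver
        net_devs+=("${dev##*/}")         # strip the sysfs path, keep the ifname
    done
done
(( ${#net_devs[@]} )) && printf 'Found net device: %s\n' "${net_devs[@]}"
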
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:46.495 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:46.495 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:47.067 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:49.615 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.917 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:54.918 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:54.918 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:54.918 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:54.918 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:24:54.918 00:24:54.918 --- 10.0.0.2 ping statistics --- 00:24:54.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.918 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:24:54.918 00:24:54.918 --- 10.0.0.1 ping statistics --- 00:24:54.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.918 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1076067 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1076067 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1076067 ']' 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.918 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.918 [2024-11-19 10:52:33.764648] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
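
The nvmf_tcp_init sequence traced above builds a self-contained topology from one dual-port NIC: the target port is moved into its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the real wire instead of loopback. A condensed sketch of the same steps; the interface and namespace names are taken from the trace, and the iptables comment tag mirrors the harness's ipts wrapper:

#!/usr/bin/env bash
set -e
TGT=cvl_0_0 INI=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"
ip netns add "$NS"
ip link set "$TGT" netns "$NS"              # target port lives in the netns
ip addr add 10.0.0.1/24 dev "$INI"          # initiator side, default netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
ip link set "$INI" up
ip netns exec "$NS" ip link set "$TGT" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator side, tagged for later cleanup:
iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow 4420'
ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
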
00:24:54.918 [2024-11-19 10:52:33.764712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.918 [2024-11-19 10:52:33.863669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.918 [2024-11-19 10:52:33.918182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.918 [2024-11-19 10:52:33.918237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.918 [2024-11-19 10:52:33.918246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.918 [2024-11-19 10:52:33.918254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.918 [2024-11-19 10:52:33.918260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.918 [2024-11-19 10:52:33.920473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.919 [2024-11-19 10:52:33.920634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.919 [2024-11-19 10:52:33.920767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.919 [2024-11-19 10:52:33.920768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.493 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.493 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:55.493 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:55.493 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:55.493 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.493 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.493 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:24:55.493 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:55.493 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:55.494 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.494 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.494 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.494 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:55.494 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:55.494 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.494 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.494 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.756 
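
Because nvmf_tgt was started with --wait-for-rpc, the sockets layer can still be tuned before framework init: adq_configure_nvmf_target queries the default sock implementation (posix here), sets its placement-id mode (0, as passed in the trace) and turns on zerocopy sends, and only then releases initialization. The same sequence via SPDK's rpc.py, path assumed to be the standard scripts/rpc.py in an SPDK checkout:

#!/usr/bin/env bash
RPC=./scripts/rpc.py    # adjust to your SPDK checkout

impl=$("$RPC" sock_get_default_impl | jq -r .impl_name)   # "posix" in the trace
"$RPC" sock_impl_set_options -i "$impl" \
    --enable-placement-id 0 --enable-zerocopy-send-server
"$RPC" framework_start_init    # reactors finish initializing only now
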
10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:55.756 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.756 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.756 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.756 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:55.756 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.756 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.756 [2024-11-19 10:52:34.784947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.756 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.756 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:55.756 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.757 Malloc1 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.757 [2024-11-19 10:52:34.862358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1076161 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:24:55.757 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:58.304 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:24:58.304 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.304 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:58.304 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.304 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:24:58.304 "tick_rate": 2400000000, 00:24:58.304 "poll_groups": [ 00:24:58.304 { 00:24:58.304 "name": "nvmf_tgt_poll_group_000", 00:24:58.304 "admin_qpairs": 1, 00:24:58.304 "io_qpairs": 1, 00:24:58.304 "current_admin_qpairs": 1, 00:24:58.304 "current_io_qpairs": 1, 00:24:58.304 "pending_bdev_io": 0, 00:24:58.304 "completed_nvme_io": 15502, 00:24:58.304 "transports": [ 00:24:58.304 { 00:24:58.304 "trtype": "TCP" 00:24:58.304 } 00:24:58.304 ] 00:24:58.304 }, 00:24:58.304 { 00:24:58.304 "name": "nvmf_tgt_poll_group_001", 00:24:58.304 "admin_qpairs": 0, 00:24:58.304 "io_qpairs": 1, 00:24:58.304 "current_admin_qpairs": 0, 00:24:58.304 "current_io_qpairs": 1, 00:24:58.304 "pending_bdev_io": 0, 00:24:58.304 "completed_nvme_io": 16016, 00:24:58.304 "transports": [ 00:24:58.304 { 00:24:58.304 "trtype": "TCP" 00:24:58.304 } 00:24:58.304 ] 00:24:58.304 }, 00:24:58.304 { 00:24:58.304 "name": "nvmf_tgt_poll_group_002", 00:24:58.304 "admin_qpairs": 0, 00:24:58.304 "io_qpairs": 1, 00:24:58.304 "current_admin_qpairs": 0, 00:24:58.304 "current_io_qpairs": 1, 00:24:58.304 "pending_bdev_io": 0, 00:24:58.304 "completed_nvme_io": 16334, 00:24:58.304 "transports": [ 00:24:58.304 { 00:24:58.304 "trtype": "TCP" 00:24:58.304 } 00:24:58.304 ] 00:24:58.304 }, 00:24:58.304 { 00:24:58.304 "name": "nvmf_tgt_poll_group_003", 00:24:58.304 "admin_qpairs": 0, 00:24:58.304 "io_qpairs": 1, 00:24:58.304 "current_admin_qpairs": 0, 00:24:58.304 "current_io_qpairs": 1, 00:24:58.304 "pending_bdev_io": 0, 00:24:58.304 "completed_nvme_io": 15624, 00:24:58.304 "transports": [ 00:24:58.304 { 00:24:58.304 "trtype": "TCP" 00:24:58.304 } 00:24:58.304 ] 00:24:58.304 } 00:24:58.304 ] 00:24:58.304 }' 00:24:58.304 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:58.304 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:24:58.304 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:24:58.304 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:24:58.304 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1076161 00:25:06.442 Initializing NVMe Controllers 00:25:06.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:06.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:06.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:06.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:25:06.442 Initialization complete. Launching workers. 00:25:06.442 ======================================================== 00:25:06.442 Latency(us) 00:25:06.442 Device Information : IOPS MiB/s Average min max 00:25:06.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12144.00 47.44 5285.73 1262.73 45462.99 00:25:06.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13008.60 50.81 4920.11 1354.58 13163.54 00:25:06.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13503.20 52.75 4739.47 1370.00 13402.59 00:25:06.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12154.00 47.48 5265.00 1134.68 11887.56 00:25:06.442 ======================================================== 00:25:06.442 Total : 50809.80 198.48 5041.99 1134.68 45462.99 00:25:06.442 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.442 rmmod nvme_tcp 00:25:06.442 rmmod nvme_fabrics 00:25:06.442 rmmod nvme_keyring 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1076067 ']' 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1076067 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1076067 ']' 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1076067 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:25:06.442 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1076067 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1076067' 00:25:06.443 killing process with pid 1076067 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1076067 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1076067 00:25:06.443 10:52:45 
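
The pass/fail gate for this phase is the nvmf_get_stats check earlier in the trace: with ADQ steering working, each of the four poll groups (one per core in the -m 0xF mask) should own exactly one IO qpair, so counting poll groups with current_io_qpairs == 1 must give 4, which is what perf_adq.sh@86-87 asserts. The check condensed; the rpc.py path is assumed:

#!/usr/bin/env bash
RPC=./scripts/rpc.py
expected=4    # one IO qpair per reactor core in the 0xF mask

# The jq expression emits one line per poll group that currently owns
# exactly one IO qpair; wc -l turns that into a count.
count=$("$RPC" nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
    | wc -l)
if [[ $count -ne $expected ]]; then
    echo "ADQ steering failed: $count/$expected poll groups busy" >&2
    exit 1
fi
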
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.443 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.356 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.356 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:25:08.356 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:08.356 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:09.738 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:12.281 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:17.577 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:17.578 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:17.578 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:17.578 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:17.578 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.578 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.578 10:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:17.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:17.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms
00:25:17.578
00:25:17.578 --- 10.0.0.2 ping statistics ---
00:25:17.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:17.578 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:17.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:17.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms
00:25:17.578
00:25:17.578 --- 10.0.0.1 ping statistics ---
00:25:17.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:17.578 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:17.578 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:25:17.579 net.core.busy_poll = 1
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:25:17.579 net.core.busy_read = 1
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1080805
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1080805
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1080805 ']'
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:17.579 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:17.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
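The adq_configure_driver trace above condenses to the standalone sketch below. Namespace, interface, address, and port mirror this log (cvl_0_0_ns_spdk, cvl_0_0, 10.0.0.2:4420) and would differ on another machine; the sketch calls tc and ethtool from PATH where the harness uses /usr/sbin/tc explicitly.

#!/usr/bin/env bash
# Condensed sketch of the ADQ setup traced above (perf_adq.sh lines 22-35).
# Assumes an ice (E810) port already moved into the target namespace.
set -e

NS=cvl_0_0_ns_spdk      # network namespace holding the target-side port
IFACE=cvl_0_0           # E810 port inside that namespace
TRADDR=10.0.0.2         # listen address steered into the ADQ traffic class
PORT=4420               # NVMe/TCP listen port

# Enable hardware TC offload and disable packet-inspect optimization,
# both prerequisites for channel-mode traffic classes on ice.
ip netns exec "$NS" ethtool --offload "$IFACE" hw-tc-offload on
ip netns exec "$NS" ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# Busy polling lets application threads spin on their own queues.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes in channel mode: TC0 owns queues 0-1, TC1 owns 2-3.
ip netns exec "$NS" tc qdisc add dev "$IFACE" root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

# Steer NVMe/TCP traffic for $TRADDR:$PORT into TC1 in hardware (skip_sw).
ip netns exec "$NS" tc qdisc add dev "$IFACE" ingress
ip netns exec "$NS" tc filter add dev "$IFACE" protocol ip parent ffff: \
    prio 1 flower dst_ip "$TRADDR"/32 ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1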
00:25:17.579 [2024-11-19 10:52:56.744586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.579 [2024-11-19 10:52:56.744596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.579 [2024-11-19 10:52:56.744603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.579 [2024-11-19 10:52:56.744609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.579 [2024-11-19 10:52:56.748196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.579 [2024-11-19 10:52:56.748525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.579 [2024-11-19 10:52:56.748686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:17.579 [2024-11-19 10:52:56.748688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.523 10:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.523 [2024-11-19 10:52:57.608374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.523 Malloc1 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.523 [2024-11-19 10:52:57.688696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1080955 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:25:18.523 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:21.070 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:25:21.070 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.070 10:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:21.070 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.070 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:25:21.070 "tick_rate": 2400000000,
00:25:21.070 "poll_groups": [
00:25:21.070 {
00:25:21.070 "name": "nvmf_tgt_poll_group_000",
00:25:21.070 "admin_qpairs": 1,
00:25:21.070 "io_qpairs": 4,
00:25:21.070 "current_admin_qpairs": 1,
00:25:21.070 "current_io_qpairs": 4,
00:25:21.070 "pending_bdev_io": 0,
00:25:21.070 "completed_nvme_io": 38302,
00:25:21.070 "transports": [
00:25:21.070 {
00:25:21.070 "trtype": "TCP"
00:25:21.070 }
00:25:21.070 ]
00:25:21.070 },
00:25:21.070 {
00:25:21.070 "name": "nvmf_tgt_poll_group_001",
00:25:21.070 "admin_qpairs": 0,
00:25:21.070 "io_qpairs": 0,
00:25:21.070 "current_admin_qpairs": 0,
00:25:21.070 "current_io_qpairs": 0,
00:25:21.070 "pending_bdev_io": 0,
00:25:21.070 "completed_nvme_io": 0,
00:25:21.070 "transports": [
00:25:21.070 {
00:25:21.070 "trtype": "TCP"
00:25:21.070 }
00:25:21.070 ]
00:25:21.070 },
00:25:21.070 {
00:25:21.070 "name": "nvmf_tgt_poll_group_002",
00:25:21.070 "admin_qpairs": 0,
00:25:21.070 "io_qpairs": 0,
00:25:21.070 "current_admin_qpairs": 0,
00:25:21.070 "current_io_qpairs": 0,
00:25:21.070 "pending_bdev_io": 0,
00:25:21.070 "completed_nvme_io": 0,
00:25:21.070 "transports": [
00:25:21.070 {
00:25:21.070 "trtype": "TCP"
00:25:21.070 }
00:25:21.070 ]
00:25:21.070 },
00:25:21.070 {
00:25:21.070 "name": "nvmf_tgt_poll_group_003",
00:25:21.070 "admin_qpairs": 0,
00:25:21.070 "io_qpairs": 0,
00:25:21.070 "current_admin_qpairs": 0,
00:25:21.070 "current_io_qpairs": 0,
00:25:21.070 "pending_bdev_io": 0,
00:25:21.070 "completed_nvme_io": 0,
00:25:21.070 "transports": [
00:25:21.070 {
00:25:21.070 "trtype": "TCP"
00:25:21.070 }
00:25:21.070 ]
00:25:21.070 }
00:25:21.070 ]
00:25:21.070 }'
00:25:21.070 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:25:21.070 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:25:21.070 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3
00:25:21.070 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]]
00:25:21.070 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1080955
00:25:29.213 Initializing NVMe Controllers
00:25:29.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:29.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:25:29.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:25:29.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:25:29.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:25:29.213 Initialization complete. Launching workers.
00:25:29.213 ========================================================
00:25:29.213 Latency(us)
00:25:29.213 Device Information : IOPS MiB/s Average min max
00:25:29.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7139.90 27.89 8963.74 1282.51 55826.36
00:25:29.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7247.20 28.31 8831.60 1278.87 58767.06
00:25:29.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4678.00 18.27 13682.70 1407.91 60637.95
00:25:29.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6737.00 26.32 9504.22 1300.73 57292.20
00:25:29.213 ========================================================
00:25:29.213 Total : 25802.09 100.79 9923.31 1278.87 60637.95
00:25:29.213
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:29.213 rmmod nvme_tcp
00:25:29.213 rmmod nvme_fabrics
00:25:29.213 rmmod nvme_keyring
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1080805 ']'
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1080805
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1080805 ']'
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1080805
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:29.213 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1080805
00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1080805'
00:25:29.213 killing process with pid 1080805
00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1080805
00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1080805
00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:29.213
10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.213 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:25:32.515 00:25:32.515 real 0m53.822s 00:25:32.515 user 2m50.243s 00:25:32.515 sys 0m11.219s 00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:32.515 ************************************ 00:25:32.515 END TEST nvmf_perf_adq 00:25:32.515 ************************************ 00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:32.515 ************************************ 00:25:32.515 START TEST nvmf_shutdown 00:25:32.515 ************************************ 00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:32.515 * Looking for test storage... 
00:25:32.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:32.515 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:25:32.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:32.515 --rc genhtml_branch_coverage=1
00:25:32.515 --rc genhtml_function_coverage=1
00:25:32.515 --rc genhtml_legend=1
00:25:32.515 --rc geninfo_all_blocks=1
00:25:32.515 --rc geninfo_unexecuted_blocks=1
00:25:32.516
00:25:32.516 '
00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:25:32.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:32.516 --rc genhtml_branch_coverage=1
00:25:32.516 --rc genhtml_function_coverage=1
00:25:32.516 --rc genhtml_legend=1
00:25:32.516 --rc geninfo_all_blocks=1
00:25:32.516 --rc geninfo_unexecuted_blocks=1
00:25:32.516
00:25:32.516 '
00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:25:32.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:32.516 --rc genhtml_branch_coverage=1
00:25:32.516 --rc genhtml_function_coverage=1
00:25:32.516 --rc genhtml_legend=1
00:25:32.516 --rc geninfo_all_blocks=1
00:25:32.516 --rc geninfo_unexecuted_blocks=1
00:25:32.516
00:25:32.516 '
00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:25:32.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:32.516 --rc genhtml_branch_coverage=1
00:25:32.516 --rc genhtml_function_coverage=1
00:25:32.516 --rc genhtml_legend=1
00:25:32.516 --rc geninfo_all_blocks=1
00:25:32.516 --rc geninfo_unexecuted_blocks=1
00:25:32.516
00:25:32.516 '
00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s
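The lt 1.15 2 call above walks cmp_versions from scripts/common.sh: split both versions on IFS=.-: and compare element-wise. A minimal re-implementation of just the '<' case is sketched below; cmp_lt is a hypothetical name, not the harness helper itself.

#!/usr/bin/env bash
# Minimal sketch of the element-wise version comparison traced above.
cmp_lt() {
    local -a ver1 ver2
    local IFS=.-:                      # split on dots, dashes, and colons, as in the trace
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        # A missing component compares as 0 (so "2" behaves like "2.0").
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1                           # equal versions are not strictly less
}

cmp_lt 1.15 2 && echo "1.15 < 2"       # matches the lt 1.15 2 result in the log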
00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:32.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:32.516 10:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:32.516 ************************************ 00:25:32.516 START TEST nvmf_shutdown_tc1 00:25:32.516 ************************************ 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:32.516 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:40.664 10:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.664 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:40.665 10:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:40.665 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:40.665 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:40.665 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:40.665 10:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:40.665 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:40.665 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:40.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:25:40.665 00:25:40.665 --- 10.0.0.2 ping statistics --- 00:25:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.665 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:25:40.665 00:25:40.665 --- 10.0.0.1 ping statistics --- 00:25:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.665 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1087427 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1087427 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1087427 ']' 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.665 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.666 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
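The nvmfappstart/waitforlisten sequence just traced reduces to: launch nvmf_tgt inside the target namespace, then poll until its RPC socket answers. A minimal sketch under those assumptions follows; the rpc_get_methods probe stands in for whatever readiness check the harness's waitforlisten actually performs.

#!/usr/bin/env bash
# Sketch of the namespaced target launch traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock

# Start the target in the namespace, backgrounded, as nvmf/common.sh@508 does.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll until the process is alive and the UNIX-domain RPC socket responds.
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" -t 1 rpc_get_methods &> /dev/null; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.5
done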
00:25:40.666 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.666 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.666 [2024-11-19 10:53:19.169866] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:25:40.666 [2024-11-19 10:53:19.169931] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.666 [2024-11-19 10:53:19.273525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.666 [2024-11-19 10:53:19.325720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.666 [2024-11-19 10:53:19.325776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.666 [2024-11-19 10:53:19.325784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.666 [2024-11-19 10:53:19.325792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.666 [2024-11-19 10:53:19.325799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.666 [2024-11-19 10:53:19.328190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.666 [2024-11-19 10:53:19.328227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.666 [2024-11-19 10:53:19.328399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:40.666 [2024-11-19 10:53:19.328400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.927 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.927 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:40.927 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:40.927 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:40.927 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.927 [2024-11-19 10:53:20.048826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:40.927 10:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.927 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:41.188 Malloc1 
00:25:41.188 [2024-11-19 10:53:20.179769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.188 Malloc2 00:25:41.188 Malloc3 00:25:41.188 Malloc4 00:25:41.188 Malloc5 00:25:41.450 Malloc6 00:25:41.450 Malloc7 00:25:41.450 Malloc8 00:25:41.450 Malloc9 00:25:41.450 Malloc10 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1087794 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1087794 /var/tmp/bdevperf.sock 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1087794 ']' 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:41.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
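The shutdown.sh@27-29 lines traced above build one RPC snippet per subsystem into rpcs.txt, and the single rpc_cmd at shutdown.sh@36 replays the whole file in one batch, which is what produces the Malloc1..Malloc10 bdevs and the NVMe/TCP listener notice above. A hedged sketch of that batching pattern: the RPC names are real SPDK RPCs, but the bdev sizes, serial numbers, and exact flags here are illustrative rather than the verbatim shutdown.sh script:

num_subsystems=({1..10})
rm -f rpcs.txt
for i in "${num_subsystems[@]}"; do
cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py --server < rpcs.txt   # roughly what rpc_cmd does under the hood

Feeding all ~40 RPCs through one rpc.py --server process, instead of invoking rpc.py once per call, avoids paying the Python start-up cost for every command, which is presumably why the harness batches them.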
00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:41.450 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:41.711 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.712 { 00:25:41.712 "params": { 00:25:41.712 "name": "Nvme$subsystem", 00:25:41.712 "trtype": "$TEST_TRANSPORT", 00:25:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.712 "adrfam": "ipv4", 00:25:41.712 "trsvcid": "$NVMF_PORT", 00:25:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.712 "hdgst": ${hdgst:-false}, 00:25:41.712 "ddgst": ${ddgst:-false} 00:25:41.712 }, 00:25:41.712 "method": "bdev_nvme_attach_controller" 00:25:41.712 } 00:25:41.712 EOF 00:25:41.712 )") 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.712 { 00:25:41.712 "params": { 00:25:41.712 "name": "Nvme$subsystem", 00:25:41.712 "trtype": "$TEST_TRANSPORT", 00:25:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.712 "adrfam": "ipv4", 00:25:41.712 "trsvcid": "$NVMF_PORT", 00:25:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.712 "hdgst": ${hdgst:-false}, 00:25:41.712 "ddgst": ${ddgst:-false} 00:25:41.712 }, 00:25:41.712 "method": "bdev_nvme_attach_controller" 00:25:41.712 } 00:25:41.712 EOF 00:25:41.712 )") 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.712 { 00:25:41.712 "params": { 00:25:41.712 "name": "Nvme$subsystem", 00:25:41.712 "trtype": "$TEST_TRANSPORT", 00:25:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.712 "adrfam": "ipv4", 00:25:41.712 "trsvcid": "$NVMF_PORT", 00:25:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.712 "hdgst": ${hdgst:-false}, 00:25:41.712 "ddgst": ${ddgst:-false} 00:25:41.712 }, 00:25:41.712 "method": "bdev_nvme_attach_controller" 00:25:41.712 } 00:25:41.712 EOF 00:25:41.712 )") 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.712 { 00:25:41.712 "params": { 00:25:41.712 "name": "Nvme$subsystem", 00:25:41.712 "trtype": "$TEST_TRANSPORT", 00:25:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.712 "adrfam": "ipv4", 00:25:41.712 "trsvcid": "$NVMF_PORT", 00:25:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.712 "hdgst": ${hdgst:-false}, 00:25:41.712 "ddgst": ${ddgst:-false} 00:25:41.712 }, 00:25:41.712 "method": "bdev_nvme_attach_controller" 00:25:41.712 } 00:25:41.712 EOF 00:25:41.712 )") 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.712 { 00:25:41.712 "params": { 00:25:41.712 "name": "Nvme$subsystem", 00:25:41.712 "trtype": "$TEST_TRANSPORT", 00:25:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.712 "adrfam": "ipv4", 00:25:41.712 "trsvcid": "$NVMF_PORT", 00:25:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.712 "hdgst": ${hdgst:-false}, 00:25:41.712 "ddgst": ${ddgst:-false} 00:25:41.712 }, 00:25:41.712 "method": "bdev_nvme_attach_controller" 00:25:41.712 } 00:25:41.712 EOF 00:25:41.712 )") 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.712 { 00:25:41.712 "params": { 00:25:41.712 "name": "Nvme$subsystem", 00:25:41.712 "trtype": "$TEST_TRANSPORT", 00:25:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.712 "adrfam": "ipv4", 00:25:41.712 "trsvcid": "$NVMF_PORT", 00:25:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.712 "hdgst": ${hdgst:-false}, 00:25:41.712 "ddgst": ${ddgst:-false} 00:25:41.712 }, 00:25:41.712 "method": "bdev_nvme_attach_controller" 00:25:41.712 } 00:25:41.712 EOF 00:25:41.712 )") 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.712 [2024-11-19 10:53:20.694971] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
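The config+=("$(cat <<-EOF ... )") lines traced above are gen_nvmf_target_json at work (nvmf/common.sh@560-586): one bdev_nvme_attach_controller fragment per subsystem is collected into an array, comma-joined via IFS=, and then validated and pretty-printed by jq. A condensed, hedged sketch of the idiom, using two subsystems instead of ten and a simplified wrapper object (the real helper embeds the fragments in its own subsystems/config skeleton):

config=()
for subsystem in 1 2; do   # the real helper iterates "${@:-1}", here 1..10
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# comma-join the fragments and let jq validate/pretty-print the final document
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
  "config": [ $(IFS=','; printf '%s' "${config[*]}") ] } ] }
JSON

As the shutdown.sh@78 command line shows, the generated JSON reaches bdev_svc as --json /dev/fd/63 (and the later bdevperf run as /dev/fd/62), i.e. through process substitution, so the ten-controller config never touches disk.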
00:25:41.712 [2024-11-19 10:53:20.695034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.712 { 00:25:41.712 "params": { 00:25:41.712 "name": "Nvme$subsystem", 00:25:41.712 "trtype": "$TEST_TRANSPORT", 00:25:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.712 "adrfam": "ipv4", 00:25:41.712 "trsvcid": "$NVMF_PORT", 00:25:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.712 "hdgst": ${hdgst:-false}, 00:25:41.712 "ddgst": ${ddgst:-false} 00:25:41.712 }, 00:25:41.712 "method": "bdev_nvme_attach_controller" 00:25:41.712 } 00:25:41.712 EOF 00:25:41.712 )") 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.712 { 00:25:41.712 "params": { 00:25:41.712 "name": "Nvme$subsystem", 00:25:41.712 "trtype": "$TEST_TRANSPORT", 00:25:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.712 "adrfam": "ipv4", 00:25:41.712 "trsvcid": "$NVMF_PORT", 00:25:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.712 "hdgst": ${hdgst:-false}, 00:25:41.712 "ddgst": ${ddgst:-false} 00:25:41.712 }, 00:25:41.712 "method": "bdev_nvme_attach_controller" 00:25:41.712 } 00:25:41.712 EOF 00:25:41.712 )") 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.712 { 00:25:41.712 "params": { 00:25:41.712 "name": "Nvme$subsystem", 00:25:41.712 "trtype": "$TEST_TRANSPORT", 00:25:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.712 "adrfam": "ipv4", 00:25:41.712 "trsvcid": "$NVMF_PORT", 00:25:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.712 "hdgst": ${hdgst:-false}, 00:25:41.712 "ddgst": ${ddgst:-false} 00:25:41.712 }, 00:25:41.712 "method": "bdev_nvme_attach_controller" 00:25:41.712 } 00:25:41.712 EOF 00:25:41.712 )") 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.712 { 00:25:41.712 "params": { 00:25:41.712 "name": "Nvme$subsystem", 00:25:41.712 "trtype": "$TEST_TRANSPORT", 00:25:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.712 "adrfam": "ipv4", 
00:25:41.712 "trsvcid": "$NVMF_PORT", 00:25:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.712 "hdgst": ${hdgst:-false}, 00:25:41.712 "ddgst": ${ddgst:-false} 00:25:41.712 }, 00:25:41.712 "method": "bdev_nvme_attach_controller" 00:25:41.712 } 00:25:41.712 EOF 00:25:41.712 )") 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:41.712 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:41.713 "params": { 00:25:41.713 "name": "Nvme1", 00:25:41.713 "trtype": "tcp", 00:25:41.713 "traddr": "10.0.0.2", 00:25:41.713 "adrfam": "ipv4", 00:25:41.713 "trsvcid": "4420", 00:25:41.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:41.713 "hdgst": false, 00:25:41.713 "ddgst": false 00:25:41.713 }, 00:25:41.713 "method": "bdev_nvme_attach_controller" 00:25:41.713 },{ 00:25:41.713 "params": { 00:25:41.713 "name": "Nvme2", 00:25:41.713 "trtype": "tcp", 00:25:41.713 "traddr": "10.0.0.2", 00:25:41.713 "adrfam": "ipv4", 00:25:41.713 "trsvcid": "4420", 00:25:41.713 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:41.713 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:41.713 "hdgst": false, 00:25:41.713 "ddgst": false 00:25:41.713 }, 00:25:41.713 "method": "bdev_nvme_attach_controller" 00:25:41.713 },{ 00:25:41.713 "params": { 00:25:41.713 "name": "Nvme3", 00:25:41.713 "trtype": "tcp", 00:25:41.713 "traddr": "10.0.0.2", 00:25:41.713 "adrfam": "ipv4", 00:25:41.713 "trsvcid": "4420", 00:25:41.713 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:41.713 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:41.713 "hdgst": false, 00:25:41.713 "ddgst": false 00:25:41.713 }, 00:25:41.713 "method": "bdev_nvme_attach_controller" 00:25:41.713 },{ 00:25:41.713 "params": { 00:25:41.713 "name": "Nvme4", 00:25:41.713 "trtype": "tcp", 00:25:41.713 "traddr": "10.0.0.2", 00:25:41.713 "adrfam": "ipv4", 00:25:41.713 "trsvcid": "4420", 00:25:41.713 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:41.713 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:41.713 "hdgst": false, 00:25:41.713 "ddgst": false 00:25:41.713 }, 00:25:41.713 "method": "bdev_nvme_attach_controller" 00:25:41.713 },{ 00:25:41.713 "params": { 00:25:41.713 "name": "Nvme5", 00:25:41.713 "trtype": "tcp", 00:25:41.713 "traddr": "10.0.0.2", 00:25:41.713 "adrfam": "ipv4", 00:25:41.713 "trsvcid": "4420", 00:25:41.713 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:41.713 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:41.713 "hdgst": false, 00:25:41.713 "ddgst": false 00:25:41.713 }, 00:25:41.713 "method": "bdev_nvme_attach_controller" 00:25:41.713 },{ 00:25:41.713 "params": { 00:25:41.713 "name": "Nvme6", 00:25:41.713 "trtype": "tcp", 00:25:41.713 "traddr": "10.0.0.2", 00:25:41.713 "adrfam": "ipv4", 00:25:41.713 "trsvcid": "4420", 00:25:41.713 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:41.713 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:41.713 "hdgst": false, 00:25:41.713 "ddgst": false 00:25:41.713 }, 00:25:41.713 "method": "bdev_nvme_attach_controller" 00:25:41.713 },{ 00:25:41.713 "params": { 00:25:41.713 "name": "Nvme7", 00:25:41.713 "trtype": "tcp", 00:25:41.713 "traddr": "10.0.0.2", 00:25:41.713 
"adrfam": "ipv4", 00:25:41.713 "trsvcid": "4420", 00:25:41.713 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:41.713 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:41.713 "hdgst": false, 00:25:41.713 "ddgst": false 00:25:41.713 }, 00:25:41.713 "method": "bdev_nvme_attach_controller" 00:25:41.713 },{ 00:25:41.713 "params": { 00:25:41.713 "name": "Nvme8", 00:25:41.713 "trtype": "tcp", 00:25:41.713 "traddr": "10.0.0.2", 00:25:41.713 "adrfam": "ipv4", 00:25:41.713 "trsvcid": "4420", 00:25:41.713 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:41.713 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:41.713 "hdgst": false, 00:25:41.713 "ddgst": false 00:25:41.713 }, 00:25:41.713 "method": "bdev_nvme_attach_controller" 00:25:41.713 },{ 00:25:41.713 "params": { 00:25:41.713 "name": "Nvme9", 00:25:41.713 "trtype": "tcp", 00:25:41.713 "traddr": "10.0.0.2", 00:25:41.713 "adrfam": "ipv4", 00:25:41.713 "trsvcid": "4420", 00:25:41.713 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:41.713 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:41.713 "hdgst": false, 00:25:41.713 "ddgst": false 00:25:41.713 }, 00:25:41.713 "method": "bdev_nvme_attach_controller" 00:25:41.713 },{ 00:25:41.713 "params": { 00:25:41.713 "name": "Nvme10", 00:25:41.713 "trtype": "tcp", 00:25:41.713 "traddr": "10.0.0.2", 00:25:41.713 "adrfam": "ipv4", 00:25:41.713 "trsvcid": "4420", 00:25:41.713 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:41.713 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:41.713 "hdgst": false, 00:25:41.713 "ddgst": false 00:25:41.713 }, 00:25:41.713 "method": "bdev_nvme_attach_controller" 00:25:41.713 }' 00:25:41.713 [2024-11-19 10:53:20.791987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.713 [2024-11-19 10:53:20.845291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.100 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.100 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:43.100 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:43.100 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.100 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:43.100 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.100 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1087794 00:25:43.100 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:43.100 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:44.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1087794 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:44.042 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1087427 00:25:44.042 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:44.042 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:44.042 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:44.042 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:44.042 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.042 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.042 { 00:25:44.042 "params": { 00:25:44.042 "name": "Nvme$subsystem", 00:25:44.042 "trtype": "$TEST_TRANSPORT", 00:25:44.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.042 "adrfam": "ipv4", 00:25:44.042 "trsvcid": "$NVMF_PORT", 00:25:44.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.042 "hdgst": ${hdgst:-false}, 00:25:44.042 "ddgst": ${ddgst:-false} 00:25:44.042 }, 00:25:44.042 "method": "bdev_nvme_attach_controller" 00:25:44.042 } 00:25:44.042 EOF 00:25:44.043 )") 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.043 { 00:25:44.043 "params": { 00:25:44.043 "name": "Nvme$subsystem", 00:25:44.043 "trtype": "$TEST_TRANSPORT", 00:25:44.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.043 "adrfam": "ipv4", 00:25:44.043 "trsvcid": "$NVMF_PORT", 00:25:44.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.043 "hdgst": ${hdgst:-false}, 00:25:44.043 "ddgst": ${ddgst:-false} 00:25:44.043 }, 00:25:44.043 "method": "bdev_nvme_attach_controller" 00:25:44.043 } 00:25:44.043 EOF 00:25:44.043 )") 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.043 { 00:25:44.043 "params": { 00:25:44.043 "name": "Nvme$subsystem", 00:25:44.043 "trtype": "$TEST_TRANSPORT", 00:25:44.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.043 "adrfam": "ipv4", 00:25:44.043 "trsvcid": "$NVMF_PORT", 00:25:44.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.043 "hdgst": ${hdgst:-false}, 00:25:44.043 "ddgst": ${ddgst:-false} 00:25:44.043 }, 00:25:44.043 "method": "bdev_nvme_attach_controller" 00:25:44.043 } 00:25:44.043 EOF 00:25:44.043 )") 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.043 { 00:25:44.043 "params": { 00:25:44.043 "name": "Nvme$subsystem", 00:25:44.043 "trtype": "$TEST_TRANSPORT", 00:25:44.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.043 "adrfam": "ipv4", 00:25:44.043 "trsvcid": "$NVMF_PORT", 00:25:44.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.043 "hdgst": ${hdgst:-false}, 00:25:44.043 "ddgst": ${ddgst:-false} 00:25:44.043 }, 00:25:44.043 "method": "bdev_nvme_attach_controller" 00:25:44.043 } 00:25:44.043 EOF 00:25:44.043 )") 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.043 { 00:25:44.043 "params": { 00:25:44.043 "name": "Nvme$subsystem", 00:25:44.043 "trtype": "$TEST_TRANSPORT", 00:25:44.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.043 "adrfam": "ipv4", 00:25:44.043 "trsvcid": "$NVMF_PORT", 00:25:44.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.043 "hdgst": ${hdgst:-false}, 00:25:44.043 "ddgst": ${ddgst:-false} 00:25:44.043 }, 00:25:44.043 "method": "bdev_nvme_attach_controller" 00:25:44.043 } 00:25:44.043 EOF 00:25:44.043 )") 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.043 { 00:25:44.043 "params": { 00:25:44.043 "name": "Nvme$subsystem", 00:25:44.043 "trtype": "$TEST_TRANSPORT", 00:25:44.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.043 "adrfam": "ipv4", 00:25:44.043 "trsvcid": "$NVMF_PORT", 00:25:44.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.043 "hdgst": ${hdgst:-false}, 00:25:44.043 "ddgst": ${ddgst:-false} 00:25:44.043 }, 00:25:44.043 "method": "bdev_nvme_attach_controller" 00:25:44.043 } 00:25:44.043 EOF 00:25:44.043 )") 00:25:44.043 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.305 [2024-11-19 10:53:23.239276] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:25:44.305 [2024-11-19 10:53:23.239330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088385 ] 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.305 { 00:25:44.305 "params": { 00:25:44.305 "name": "Nvme$subsystem", 00:25:44.305 "trtype": "$TEST_TRANSPORT", 00:25:44.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.305 "adrfam": "ipv4", 00:25:44.305 "trsvcid": "$NVMF_PORT", 00:25:44.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.305 "hdgst": ${hdgst:-false}, 00:25:44.305 "ddgst": ${ddgst:-false} 00:25:44.305 }, 00:25:44.305 "method": "bdev_nvme_attach_controller" 00:25:44.305 } 00:25:44.305 EOF 00:25:44.305 )") 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.305 { 00:25:44.305 "params": { 00:25:44.305 "name": "Nvme$subsystem", 00:25:44.305 "trtype": "$TEST_TRANSPORT", 00:25:44.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.305 "adrfam": "ipv4", 00:25:44.305 "trsvcid": "$NVMF_PORT", 00:25:44.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.305 "hdgst": ${hdgst:-false}, 00:25:44.305 "ddgst": ${ddgst:-false} 00:25:44.305 }, 00:25:44.305 "method": "bdev_nvme_attach_controller" 00:25:44.305 } 00:25:44.305 EOF 00:25:44.305 )") 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.305 { 00:25:44.305 "params": { 00:25:44.305 "name": "Nvme$subsystem", 00:25:44.305 "trtype": "$TEST_TRANSPORT", 00:25:44.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.305 "adrfam": "ipv4", 00:25:44.305 "trsvcid": "$NVMF_PORT", 00:25:44.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.305 "hdgst": ${hdgst:-false}, 00:25:44.305 "ddgst": ${ddgst:-false} 00:25:44.305 }, 00:25:44.305 "method": "bdev_nvme_attach_controller" 00:25:44.305 } 00:25:44.305 EOF 00:25:44.305 )") 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.305 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.305 { 00:25:44.305 "params": { 00:25:44.305 "name": "Nvme$subsystem", 00:25:44.305 "trtype": "$TEST_TRANSPORT", 00:25:44.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.305 "adrfam": "ipv4", 00:25:44.305 "trsvcid": "$NVMF_PORT", 00:25:44.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.305 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.305 "hdgst": ${hdgst:-false}, 00:25:44.305 "ddgst": ${ddgst:-false} 00:25:44.305 }, 00:25:44.305 "method": "bdev_nvme_attach_controller" 00:25:44.306 } 00:25:44.306 EOF 00:25:44.306 )") 00:25:44.306 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:44.306 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:25:44.306 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:44.306 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:44.306 "params": { 00:25:44.306 "name": "Nvme1", 00:25:44.306 "trtype": "tcp", 00:25:44.306 "traddr": "10.0.0.2", 00:25:44.306 "adrfam": "ipv4", 00:25:44.306 "trsvcid": "4420", 00:25:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:44.306 "hdgst": false, 00:25:44.306 "ddgst": false 00:25:44.306 }, 00:25:44.306 "method": "bdev_nvme_attach_controller" 00:25:44.306 },{ 00:25:44.306 "params": { 00:25:44.306 "name": "Nvme2", 00:25:44.306 "trtype": "tcp", 00:25:44.306 "traddr": "10.0.0.2", 00:25:44.306 "adrfam": "ipv4", 00:25:44.306 "trsvcid": "4420", 00:25:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:44.306 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:44.306 "hdgst": false, 00:25:44.306 "ddgst": false 00:25:44.306 }, 00:25:44.306 "method": "bdev_nvme_attach_controller" 00:25:44.306 },{ 00:25:44.306 "params": { 00:25:44.306 "name": "Nvme3", 00:25:44.306 "trtype": "tcp", 00:25:44.306 "traddr": "10.0.0.2", 00:25:44.306 "adrfam": "ipv4", 00:25:44.306 "trsvcid": "4420", 00:25:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:44.306 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:44.306 "hdgst": false, 00:25:44.306 "ddgst": false 00:25:44.306 }, 00:25:44.306 "method": "bdev_nvme_attach_controller" 00:25:44.306 },{ 00:25:44.306 "params": { 00:25:44.306 "name": "Nvme4", 00:25:44.306 "trtype": "tcp", 00:25:44.306 "traddr": "10.0.0.2", 00:25:44.306 "adrfam": "ipv4", 00:25:44.306 "trsvcid": "4420", 00:25:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:44.306 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:44.306 "hdgst": false, 00:25:44.306 "ddgst": false 00:25:44.306 }, 00:25:44.306 "method": "bdev_nvme_attach_controller" 00:25:44.306 },{ 00:25:44.306 "params": { 00:25:44.306 "name": "Nvme5", 00:25:44.306 "trtype": "tcp", 00:25:44.306 "traddr": "10.0.0.2", 00:25:44.306 "adrfam": "ipv4", 00:25:44.306 "trsvcid": "4420", 00:25:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:44.306 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:44.306 "hdgst": false, 00:25:44.306 "ddgst": false 00:25:44.306 }, 00:25:44.306 "method": "bdev_nvme_attach_controller" 00:25:44.306 },{ 00:25:44.306 "params": { 00:25:44.306 "name": "Nvme6", 00:25:44.306 "trtype": "tcp", 00:25:44.306 "traddr": "10.0.0.2", 00:25:44.306 "adrfam": "ipv4", 00:25:44.306 "trsvcid": "4420", 00:25:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:44.306 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:44.306 "hdgst": false, 00:25:44.306 "ddgst": false 00:25:44.306 }, 00:25:44.306 "method": "bdev_nvme_attach_controller" 00:25:44.306 },{ 00:25:44.306 "params": { 00:25:44.306 "name": "Nvme7", 00:25:44.306 "trtype": "tcp", 00:25:44.306 "traddr": "10.0.0.2", 00:25:44.306 "adrfam": "ipv4", 00:25:44.306 "trsvcid": "4420", 00:25:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:44.306 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:44.306 "hdgst": false, 00:25:44.306 "ddgst": false 00:25:44.306 }, 00:25:44.306 "method": "bdev_nvme_attach_controller" 00:25:44.306 },{ 00:25:44.306 "params": { 00:25:44.306 "name": "Nvme8", 00:25:44.306 "trtype": "tcp", 00:25:44.306 "traddr": "10.0.0.2", 00:25:44.306 "adrfam": "ipv4", 00:25:44.306 "trsvcid": "4420", 00:25:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:44.306 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:44.306 "hdgst": false, 00:25:44.306 "ddgst": false 00:25:44.306 }, 00:25:44.306 "method": "bdev_nvme_attach_controller" 00:25:44.306 },{ 00:25:44.306 "params": { 00:25:44.306 "name": "Nvme9", 00:25:44.306 "trtype": "tcp", 00:25:44.306 "traddr": "10.0.0.2", 00:25:44.306 "adrfam": "ipv4", 00:25:44.306 "trsvcid": "4420", 00:25:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:44.306 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:44.306 "hdgst": false, 00:25:44.306 "ddgst": false 00:25:44.306 }, 00:25:44.306 "method": "bdev_nvme_attach_controller" 00:25:44.306 },{ 00:25:44.306 "params": { 00:25:44.306 "name": "Nvme10", 00:25:44.306 "trtype": "tcp", 00:25:44.306 "traddr": "10.0.0.2", 00:25:44.306 "adrfam": "ipv4", 00:25:44.306 "trsvcid": "4420", 00:25:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:44.306 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:44.306 "hdgst": false, 00:25:44.306 "ddgst": false 00:25:44.306 }, 00:25:44.306 "method": "bdev_nvme_attach_controller" 00:25:44.306 }' 00:25:44.306 [2024-11-19 10:53:23.328460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.306 [2024-11-19 10:53:23.364620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.691 Running I/O for 1 seconds... 00:25:46.633 1809.00 IOPS, 113.06 MiB/s 00:25:46.633 Latency(us) 00:25:46.633 [2024-11-19T09:53:25.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.633 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.633 Verification LBA range: start 0x0 length 0x400 00:25:46.633 Nvme1n1 : 1.14 223.92 13.99 0.00 0.00 282325.12 22500.69 251658.24 00:25:46.633 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.633 Verification LBA range: start 0x0 length 0x400 00:25:46.633 Nvme2n1 : 1.14 225.45 14.09 0.00 0.00 276050.13 16056.32 253405.87 00:25:46.633 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.633 Verification LBA range: start 0x0 length 0x400 00:25:46.633 Nvme3n1 : 1.13 227.54 14.22 0.00 0.00 268525.65 18786.99 249910.61 00:25:46.633 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.633 Verification LBA range: start 0x0 length 0x400 00:25:46.633 Nvme4n1 : 1.12 228.61 14.29 0.00 0.00 262516.91 18350.08 260396.37 00:25:46.633 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.633 Verification LBA range: start 0x0 length 0x400 00:25:46.633 Nvme5n1 : 1.13 226.68 14.17 0.00 0.00 259944.96 17803.95 244667.73 00:25:46.633 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.633 Verification LBA range: start 0x0 length 0x400 00:25:46.633 Nvme6n1 : 1.14 224.14 14.01 0.00 0.00 258446.93 18240.85 251658.24 00:25:46.633 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.633 Verification LBA range: start 0x0 length 0x400 00:25:46.633 Nvme7n1 : 1.17 275.75 17.23 0.00 0.00 202389.01 9611.95 265639.25 00:25:46.633 Job: Nvme8n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:25:46.633 Verification LBA range: start 0x0 length 0x400 00:25:46.633 Nvme8n1 : 1.15 278.64 17.42 0.00 0.00 200025.77 17367.04 246415.36 00:25:46.633 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.633 Verification LBA range: start 0x0 length 0x400 00:25:46.633 Nvme9n1 : 1.18 222.31 13.89 0.00 0.00 246029.84 2225.49 270882.13 00:25:46.633 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.633 Verification LBA range: start 0x0 length 0x400 00:25:46.633 Nvme10n1 : 1.19 268.15 16.76 0.00 0.00 201513.47 8792.75 269134.51 00:25:46.633 [2024-11-19T09:53:25.828Z] =================================================================================================================== 00:25:46.633 [2024-11-19T09:53:25.828Z] Total : 2401.19 150.07 0.00 0.00 242623.35 2225.49 270882.13 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:46.894 rmmod nvme_tcp 00:25:46.894 rmmod nvme_fabrics 00:25:46.894 rmmod nvme_keyring 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1087427 ']' 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1087427 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1087427 ']' 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1087427 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:25:46.894 10:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1087427 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1087427' 00:25:46.894 killing process with pid 1087427 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1087427 00:25:46.894 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1087427 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.156 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:49.705 00:25:49.705 real 0m16.717s 00:25:49.705 user 0m33.449s 00:25:49.705 sys 0m6.905s 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:49.705 ************************************ 00:25:49.705 END TEST nvmf_shutdown_tc1 00:25:49.705 ************************************ 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:49.705 10:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:49.705 ************************************ 00:25:49.705 START TEST nvmf_shutdown_tc2 00:25:49.705 ************************************ 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.705 10:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.705 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 
- 0x159b)' 00:25:49.706 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:49.706 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:49.706 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.706 
10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:49.706 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- 
# ip -4 addr flush cvl_0_1 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:25:49.706 00:25:49.706 --- 10.0.0.2 ping statistics --- 00:25:49.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.706 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:25:49.706 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:25:49.706 00:25:49.706 --- 10.0.0.1 ping statistics --- 00:25:49.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.706 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1089598 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1089598 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1089598 ']' 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
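
Everything from nvmf_tcp_init down to the two pings above is the suite wiring a self-contained test topology out of the two physical E810 ports: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and becomes the target interface at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction proves the link before nvmf_tgt is launched inside the namespace. Condensed from the trace, with all interface names and addresses exactly as logged:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

The tagged iptables comment is what lets the later cleanup phase strip exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore.
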
00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.707 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.707 [2024-11-19 10:53:28.811498] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:25:49.707 [2024-11-19 10:53:28.811548] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.707 [2024-11-19 10:53:28.894649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.968 [2024-11-19 10:53:28.935095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.968 [2024-11-19 10:53:28.935132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.968 [2024-11-19 10:53:28.935143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.968 [2024-11-19 10:53:28.935151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.968 [2024-11-19 10:53:28.935164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.968 [2024-11-19 10:53:28.936827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.968 [2024-11-19 10:53:28.936978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.968 [2024-11-19 10:53:28.937131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:49.968 [2024-11-19 10:53:28.937132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.968 [2024-11-19 10:53:29.057070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:49.968 10:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.968 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.968 Malloc1 
00:25:50.229 [2024-11-19 10:53:29.170784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.229 Malloc2 00:25:50.229 Malloc3 00:25:50.229 Malloc4 00:25:50.229 Malloc5 00:25:50.229 Malloc6 00:25:50.229 Malloc7 00:25:50.491 Malloc8 00:25:50.491 Malloc9 00:25:50.491 Malloc10 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1089656 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1089656 /var/tmp/bdevperf.sock 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1089656 ']' 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
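
The ten 'for i in "${num_subsystems[@]}"' / 'cat' pairs above are shutdown.sh accumulating one block of RPC commands per subsystem into rpcs.txt and then replaying the whole file through a single rpc_cmd invocation; the Malloc1 through Malloc10 lines are the target acknowledging the backing bdevs as the batch executes. The trace does not show the here-doc bodies, so the RPC names below are an assumed reconstruction of the pattern, not a verbatim copy:

num_subsystems=({1..10})
rm -f rpcs.txt
for i in "${num_subsystems[@]}"; do
  {
    # assumed RPC set per subsystem: backing bdev, subsystem, namespace, listener
    echo "bdev_malloc_create -b Malloc$i 64 512"
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
    echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
  } >> rpcs.txt
done
rpc_cmd < rpcs.txt   # rpc_cmd is the suite's wrapper around scripts/rpc.py

Batching keeps the setup to one RPC client session instead of forty separate invocations, which noticeably shortens setup on loaded CI nodes.
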
00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.491 { 00:25:50.491 "params": { 00:25:50.491 "name": "Nvme$subsystem", 00:25:50.491 "trtype": "$TEST_TRANSPORT", 00:25:50.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.491 "adrfam": "ipv4", 00:25:50.491 "trsvcid": "$NVMF_PORT", 00:25:50.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.491 "hdgst": ${hdgst:-false}, 00:25:50.491 "ddgst": ${ddgst:-false} 00:25:50.491 }, 00:25:50.491 "method": "bdev_nvme_attach_controller" 00:25:50.491 } 00:25:50.491 EOF 00:25:50.491 )") 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.491 { 00:25:50.491 "params": { 00:25:50.491 "name": "Nvme$subsystem", 00:25:50.491 "trtype": "$TEST_TRANSPORT", 00:25:50.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.491 "adrfam": "ipv4", 00:25:50.491 "trsvcid": "$NVMF_PORT", 00:25:50.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.491 "hdgst": ${hdgst:-false}, 00:25:50.491 "ddgst": ${ddgst:-false} 00:25:50.491 }, 00:25:50.491 "method": "bdev_nvme_attach_controller" 00:25:50.491 } 00:25:50.491 EOF 00:25:50.491 )") 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.491 { 00:25:50.491 "params": { 00:25:50.491 "name": "Nvme$subsystem", 00:25:50.491 "trtype": "$TEST_TRANSPORT", 00:25:50.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.491 "adrfam": "ipv4", 00:25:50.491 "trsvcid": "$NVMF_PORT", 00:25:50.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.491 "hdgst": ${hdgst:-false}, 00:25:50.491 "ddgst": ${ddgst:-false} 00:25:50.491 }, 00:25:50.491 "method": 
"bdev_nvme_attach_controller" 00:25:50.491 } 00:25:50.491 EOF 00:25:50.491 )") 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.491 { 00:25:50.491 "params": { 00:25:50.491 "name": "Nvme$subsystem", 00:25:50.491 "trtype": "$TEST_TRANSPORT", 00:25:50.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.491 "adrfam": "ipv4", 00:25:50.491 "trsvcid": "$NVMF_PORT", 00:25:50.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.491 "hdgst": ${hdgst:-false}, 00:25:50.491 "ddgst": ${ddgst:-false} 00:25:50.491 }, 00:25:50.491 "method": "bdev_nvme_attach_controller" 00:25:50.491 } 00:25:50.491 EOF 00:25:50.491 )") 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.491 { 00:25:50.491 "params": { 00:25:50.491 "name": "Nvme$subsystem", 00:25:50.491 "trtype": "$TEST_TRANSPORT", 00:25:50.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.491 "adrfam": "ipv4", 00:25:50.491 "trsvcid": "$NVMF_PORT", 00:25:50.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.491 "hdgst": ${hdgst:-false}, 00:25:50.491 "ddgst": ${ddgst:-false} 00:25:50.491 }, 00:25:50.491 "method": "bdev_nvme_attach_controller" 00:25:50.491 } 00:25:50.491 EOF 00:25:50.491 )") 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.491 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.492 { 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme$subsystem", 00:25:50.492 "trtype": "$TEST_TRANSPORT", 00:25:50.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "$NVMF_PORT", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.492 "hdgst": ${hdgst:-false}, 00:25:50.492 "ddgst": ${ddgst:-false} 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 } 00:25:50.492 EOF 00:25:50.492 )") 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.492 [2024-11-19 10:53:29.618061] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:25:50.492 [2024-11-19 10:53:29.618112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089656 ] 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.492 { 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme$subsystem", 00:25:50.492 "trtype": "$TEST_TRANSPORT", 00:25:50.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "$NVMF_PORT", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.492 "hdgst": ${hdgst:-false}, 00:25:50.492 "ddgst": ${ddgst:-false} 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 } 00:25:50.492 EOF 00:25:50.492 )") 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.492 { 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme$subsystem", 00:25:50.492 "trtype": "$TEST_TRANSPORT", 00:25:50.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "$NVMF_PORT", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.492 "hdgst": ${hdgst:-false}, 00:25:50.492 "ddgst": ${ddgst:-false} 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 } 00:25:50.492 EOF 00:25:50.492 )") 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.492 { 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme$subsystem", 00:25:50.492 "trtype": "$TEST_TRANSPORT", 00:25:50.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "$NVMF_PORT", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.492 "hdgst": ${hdgst:-false}, 00:25:50.492 "ddgst": ${ddgst:-false} 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 } 00:25:50.492 EOF 00:25:50.492 )") 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.492 { 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme$subsystem", 00:25:50.492 "trtype": "$TEST_TRANSPORT", 00:25:50.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.492 
"adrfam": "ipv4", 00:25:50.492 "trsvcid": "$NVMF_PORT", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.492 "hdgst": ${hdgst:-false}, 00:25:50.492 "ddgst": ${ddgst:-false} 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 } 00:25:50.492 EOF 00:25:50.492 )") 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:25:50.492 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme1", 00:25:50.492 "trtype": "tcp", 00:25:50.492 "traddr": "10.0.0.2", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "4420", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:50.492 "hdgst": false, 00:25:50.492 "ddgst": false 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 },{ 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme2", 00:25:50.492 "trtype": "tcp", 00:25:50.492 "traddr": "10.0.0.2", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "4420", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:50.492 "hdgst": false, 00:25:50.492 "ddgst": false 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 },{ 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme3", 00:25:50.492 "trtype": "tcp", 00:25:50.492 "traddr": "10.0.0.2", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "4420", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:50.492 "hdgst": false, 00:25:50.492 "ddgst": false 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 },{ 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme4", 00:25:50.492 "trtype": "tcp", 00:25:50.492 "traddr": "10.0.0.2", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "4420", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:50.492 "hdgst": false, 00:25:50.492 "ddgst": false 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 },{ 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme5", 00:25:50.492 "trtype": "tcp", 00:25:50.492 "traddr": "10.0.0.2", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "4420", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:50.492 "hdgst": false, 00:25:50.492 "ddgst": false 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 },{ 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme6", 00:25:50.492 "trtype": "tcp", 00:25:50.492 "traddr": "10.0.0.2", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "4420", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:50.492 "hdgst": false, 00:25:50.492 "ddgst": false 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 },{ 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme7", 00:25:50.492 "trtype": "tcp", 00:25:50.492 "traddr": "10.0.0.2", 
00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "4420", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:50.492 "hdgst": false, 00:25:50.492 "ddgst": false 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 },{ 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme8", 00:25:50.492 "trtype": "tcp", 00:25:50.492 "traddr": "10.0.0.2", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "4420", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:50.492 "hdgst": false, 00:25:50.492 "ddgst": false 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 },{ 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme9", 00:25:50.492 "trtype": "tcp", 00:25:50.492 "traddr": "10.0.0.2", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "4420", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:50.492 "hdgst": false, 00:25:50.492 "ddgst": false 00:25:50.492 }, 00:25:50.492 "method": "bdev_nvme_attach_controller" 00:25:50.492 },{ 00:25:50.492 "params": { 00:25:50.492 "name": "Nvme10", 00:25:50.492 "trtype": "tcp", 00:25:50.492 "traddr": "10.0.0.2", 00:25:50.492 "adrfam": "ipv4", 00:25:50.492 "trsvcid": "4420", 00:25:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:50.492 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:50.492 "hdgst": false, 00:25:50.492 "ddgst": false 00:25:50.493 }, 00:25:50.493 "method": "bdev_nvme_attach_controller" 00:25:50.493 }' 00:25:50.753 [2024-11-19 10:53:29.706298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.753 [2024-11-19 10:53:29.742607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.138 Running I/O for 10 seconds... 
00:25:52.138 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.138 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:52.138 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:52.138 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.138 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.399 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:52.400 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.400 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:52.400 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:52.400 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:52.661 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:52.661 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:52.661 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:52.661 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:52.661 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.661 10:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:52.661 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.661 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:52.661 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:52.661 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1089656 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1089656 ']' 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1089656 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.923 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1089656 00:25:53.185 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:53.185 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:53.185 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1089656' 00:25:53.185 killing process with pid 1089656 00:25:53.185 10:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1089656
10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1089656
2253.00 IOPS, 140.81 MiB/s [2024-11-19T09:53:32.380Z] Received shutdown signal, test time was about 1.024391 seconds

                                                                   Latency(us)
Device Information                                                : runtime(s)     IOPS    MiB/s  Fail/s   TO/s    Average        min        max
Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
  Nvme1n1                                                         :       1.00   255.80    15.99    0.00   0.00  246755.20   20425.39  251658.24
Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
  Nvme2n1                                                         :       1.02   250.70    15.67    0.00   0.00  246830.83   13871.79  235929.60
Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
  Nvme3n1                                                         :       0.99   259.81    16.24    0.00   0.00  233922.77   19223.89  267386.88
Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
  Nvme4n1                                                         :       1.00   261.17    16.32    0.00   0.00  226770.42    6608.21  200103.25
Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
  Nvme5n1                                                         :       0.99   257.82    16.11    0.00   0.00  226528.00   16274.77  284863.15
Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
  Nvme6n1                                                         :       0.98   196.40    12.28    0.00   0.00  290602.95   19660.80  255153.49
Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
  Nvme7n1                                                         :       1.00   256.57    16.04    0.00   0.00  218294.40   20425.39  262144.00
Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
  Nvme8n1                                                         :       1.02   250.12    15.63    0.00   0.00  219903.36   15182.51  255153.49
Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
  Nvme9n1                                                         :       0.98   201.09    12.57    0.00   0.00  263863.83    5515.95  253405.87
Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
  Nvme10n1                                                        :       0.99   193.80    12.11    0.00   0.00  270034.49   18568.53  274377.39
===================================================================================================================
  Total                                                           :    2383.29   148.96     0.00    0.00  241899.84    5515.95  284863.15

10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1089598
10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f
./local-job0-0-verify.state 00:25:54.570 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:54.570 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:54.570 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:54.570 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:54.570 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:54.570 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:54.570 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:54.570 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:54.570 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:54.571 rmmod nvme_tcp 00:25:54.571 rmmod nvme_fabrics 00:25:54.571 rmmod nvme_keyring 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1089598 ']' 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1089598 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1089598 ']' 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1089598 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1089598 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1089598' 00:25:54.571 killing process with pid 1089598 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1089598 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1089598 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.571 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.121 00:25:57.121 real 0m7.439s 00:25:57.121 user 0m22.043s 00:25:57.121 sys 0m1.277s 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:57.121 ************************************ 00:25:57.121 END TEST nvmf_shutdown_tc2 00:25:57.121 ************************************ 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:57.121 ************************************ 00:25:57.121 START TEST nvmf_shutdown_tc3 00:25:57.121 ************************************ 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:57.121 10:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.121 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.122 10:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:57.122 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:57.122 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:57.122 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:57.122 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
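For reference, the discovery loop traced above resolves each whitelisted PCI address to its kernel interface name purely through sysfs. A minimal standalone sketch of that lookup (PCI addresses taken from this log; this is not the framework's actual nvmf/common.sh helper):

# Map a PCI network function to its kernel netdev name via sysfs.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    # Each network-class PCI device lists its interface(s) under net/.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Strip the sysfs path, leaving just the interface names (e.g. cvl_0_0).
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done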
00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.122 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.122 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.122 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.122 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.122 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.122 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.122 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.122 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:57.122 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:57.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:25:57.122 00:25:57.122 --- 10.0.0.2 ping statistics --- 00:25:57.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.122 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:25:57.122 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:57.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:25:57.122 00:25:57.122 --- 10.0.0.1 ping statistics --- 00:25:57.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.122 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1091121 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1091121 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:57.123 10:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1091121 ']' 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.123 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:57.384 [2024-11-19 10:53:36.328507] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:25:57.384 [2024-11-19 10:53:36.328573] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.384 [2024-11-19 10:53:36.425214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:57.384 [2024-11-19 10:53:36.464239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.384 [2024-11-19 10:53:36.464272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.384 [2024-11-19 10:53:36.464278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.384 [2024-11-19 10:53:36.464283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.384 [2024-11-19 10:53:36.464287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
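Stripped of xtrace noise, the nvmf_tcp_init fixture traced above (common.sh@250-291) is a handful of ip/iptables calls: the target-side port is moved into a private network namespace so that initiator and target can talk over real hardware on a single host. A sketch reconstructed from the trace, with interface names and addresses copied from the log:

# Build the two-port NVMe/TCP test fixture (cvl_0_0 = target port,
# cvl_0_1 = initiator port, names taken from the log above).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # isolate target port
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept the NVMe/TCP listener port, then verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself then runs inside the namespace, which is why the trace wraps nvmf_tgt in ip netns exec cvl_0_0_ns_spdk.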
00:25:57.384 [2024-11-19 10:53:36.466043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.384 [2024-11-19 10:53:36.466213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.384 [2024-11-19 10:53:36.466368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.384 [2024-11-19 10:53:36.466369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:57.956 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.956 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:57.956 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:57.956 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:57.956 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.218 [2024-11-19 10:53:37.178671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.218 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.218 Malloc1 00:25:58.218 [2024-11-19 10:53:37.287923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.218 Malloc2 00:25:58.218 Malloc3 00:25:58.218 Malloc4 00:25:58.478 Malloc5 00:25:58.478 Malloc6 00:25:58.478 Malloc7 00:25:58.478 Malloc8 00:25:58.478 Malloc9 00:25:58.478 Malloc10 00:25:58.478 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.478 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:58.478 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.478 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1091507 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1091507 /var/tmp/bdevperf.sock 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1091507 ']' 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:58.740 10:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:58.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.740 { 00:25:58.740 "params": { 00:25:58.740 "name": "Nvme$subsystem", 00:25:58.740 "trtype": "$TEST_TRANSPORT", 00:25:58.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.740 "adrfam": "ipv4", 00:25:58.740 "trsvcid": "$NVMF_PORT", 00:25:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.740 "hdgst": ${hdgst:-false}, 00:25:58.740 "ddgst": ${ddgst:-false} 00:25:58.740 }, 00:25:58.740 "method": "bdev_nvme_attach_controller" 00:25:58.740 } 00:25:58.740 EOF 00:25:58.740 )") 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.740 { 00:25:58.740 "params": { 00:25:58.740 "name": "Nvme$subsystem", 00:25:58.740 "trtype": "$TEST_TRANSPORT", 00:25:58.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.740 "adrfam": "ipv4", 00:25:58.740 "trsvcid": "$NVMF_PORT", 00:25:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.740 "hdgst": ${hdgst:-false}, 00:25:58.740 "ddgst": ${ddgst:-false} 00:25:58.740 }, 00:25:58.740 "method": "bdev_nvme_attach_controller" 00:25:58.740 } 00:25:58.740 EOF 00:25:58.740 )") 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.740 { 00:25:58.740 "params": { 00:25:58.740 
"name": "Nvme$subsystem", 00:25:58.740 "trtype": "$TEST_TRANSPORT", 00:25:58.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.740 "adrfam": "ipv4", 00:25:58.740 "trsvcid": "$NVMF_PORT", 00:25:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.740 "hdgst": ${hdgst:-false}, 00:25:58.740 "ddgst": ${ddgst:-false} 00:25:58.740 }, 00:25:58.740 "method": "bdev_nvme_attach_controller" 00:25:58.740 } 00:25:58.740 EOF 00:25:58.740 )") 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.740 { 00:25:58.740 "params": { 00:25:58.740 "name": "Nvme$subsystem", 00:25:58.740 "trtype": "$TEST_TRANSPORT", 00:25:58.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.740 "adrfam": "ipv4", 00:25:58.740 "trsvcid": "$NVMF_PORT", 00:25:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.740 "hdgst": ${hdgst:-false}, 00:25:58.740 "ddgst": ${ddgst:-false} 00:25:58.740 }, 00:25:58.740 "method": "bdev_nvme_attach_controller" 00:25:58.740 } 00:25:58.740 EOF 00:25:58.740 )") 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.740 { 00:25:58.740 "params": { 00:25:58.740 "name": "Nvme$subsystem", 00:25:58.740 "trtype": "$TEST_TRANSPORT", 00:25:58.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.740 "adrfam": "ipv4", 00:25:58.740 "trsvcid": "$NVMF_PORT", 00:25:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.740 "hdgst": ${hdgst:-false}, 00:25:58.740 "ddgst": ${ddgst:-false} 00:25:58.740 }, 00:25:58.740 "method": "bdev_nvme_attach_controller" 00:25:58.740 } 00:25:58.740 EOF 00:25:58.740 )") 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.740 { 00:25:58.740 "params": { 00:25:58.740 "name": "Nvme$subsystem", 00:25:58.740 "trtype": "$TEST_TRANSPORT", 00:25:58.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.740 "adrfam": "ipv4", 00:25:58.740 "trsvcid": "$NVMF_PORT", 00:25:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.740 "hdgst": ${hdgst:-false}, 00:25:58.740 "ddgst": ${ddgst:-false} 00:25:58.740 }, 00:25:58.740 "method": "bdev_nvme_attach_controller" 00:25:58.740 } 00:25:58.740 EOF 00:25:58.740 )") 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.740 [2024-11-19 10:53:37.732938] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:25:58.740 [2024-11-19 10:53:37.732991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091507 ] 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.740 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.740 { 00:25:58.740 "params": { 00:25:58.740 "name": "Nvme$subsystem", 00:25:58.740 "trtype": "$TEST_TRANSPORT", 00:25:58.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.740 "adrfam": "ipv4", 00:25:58.740 "trsvcid": "$NVMF_PORT", 00:25:58.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.740 "hdgst": ${hdgst:-false}, 00:25:58.740 "ddgst": ${ddgst:-false} 00:25:58.740 }, 00:25:58.740 "method": "bdev_nvme_attach_controller" 00:25:58.740 } 00:25:58.741 EOF 00:25:58.741 )") 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.741 { 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme$subsystem", 00:25:58.741 "trtype": "$TEST_TRANSPORT", 00:25:58.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "$NVMF_PORT", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.741 "hdgst": ${hdgst:-false}, 00:25:58.741 "ddgst": ${ddgst:-false} 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 } 00:25:58.741 EOF 00:25:58.741 )") 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.741 { 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme$subsystem", 00:25:58.741 "trtype": "$TEST_TRANSPORT", 00:25:58.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "$NVMF_PORT", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.741 "hdgst": ${hdgst:-false}, 00:25:58.741 "ddgst": ${ddgst:-false} 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 } 00:25:58.741 EOF 00:25:58.741 )") 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.741 { 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme$subsystem", 00:25:58.741 "trtype": "$TEST_TRANSPORT", 00:25:58.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.741 
"adrfam": "ipv4", 00:25:58.741 "trsvcid": "$NVMF_PORT", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.741 "hdgst": ${hdgst:-false}, 00:25:58.741 "ddgst": ${ddgst:-false} 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 } 00:25:58.741 EOF 00:25:58.741 )") 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:25:58.741 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme1", 00:25:58.741 "trtype": "tcp", 00:25:58.741 "traddr": "10.0.0.2", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "4420", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:58.741 "hdgst": false, 00:25:58.741 "ddgst": false 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 },{ 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme2", 00:25:58.741 "trtype": "tcp", 00:25:58.741 "traddr": "10.0.0.2", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "4420", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:58.741 "hdgst": false, 00:25:58.741 "ddgst": false 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 },{ 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme3", 00:25:58.741 "trtype": "tcp", 00:25:58.741 "traddr": "10.0.0.2", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "4420", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:58.741 "hdgst": false, 00:25:58.741 "ddgst": false 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 },{ 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme4", 00:25:58.741 "trtype": "tcp", 00:25:58.741 "traddr": "10.0.0.2", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "4420", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:58.741 "hdgst": false, 00:25:58.741 "ddgst": false 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 },{ 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme5", 00:25:58.741 "trtype": "tcp", 00:25:58.741 "traddr": "10.0.0.2", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "4420", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:58.741 "hdgst": false, 00:25:58.741 "ddgst": false 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 },{ 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme6", 00:25:58.741 "trtype": "tcp", 00:25:58.741 "traddr": "10.0.0.2", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "4420", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:58.741 "hdgst": false, 00:25:58.741 "ddgst": false 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 },{ 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme7", 00:25:58.741 "trtype": "tcp", 00:25:58.741 "traddr": "10.0.0.2", 
00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "4420", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:58.741 "hdgst": false, 00:25:58.741 "ddgst": false 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 },{ 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme8", 00:25:58.741 "trtype": "tcp", 00:25:58.741 "traddr": "10.0.0.2", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "4420", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:58.741 "hdgst": false, 00:25:58.741 "ddgst": false 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 },{ 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme9", 00:25:58.741 "trtype": "tcp", 00:25:58.741 "traddr": "10.0.0.2", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "4420", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:58.741 "hdgst": false, 00:25:58.741 "ddgst": false 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 },{ 00:25:58.741 "params": { 00:25:58.741 "name": "Nvme10", 00:25:58.741 "trtype": "tcp", 00:25:58.741 "traddr": "10.0.0.2", 00:25:58.741 "adrfam": "ipv4", 00:25:58.741 "trsvcid": "4420", 00:25:58.741 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:58.741 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:58.741 "hdgst": false, 00:25:58.741 "ddgst": false 00:25:58.741 }, 00:25:58.741 "method": "bdev_nvme_attach_controller" 00:25:58.741 }' 00:25:58.741 [2024-11-19 10:53:37.821828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.741 [2024-11-19 10:53:37.858066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.654 Running I/O for 10 seconds... 
00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:01.226 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1091121 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1091121 ']' 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1091121 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.488 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091121 00:26:01.761 1796.00 IOPS, 112.25 MiB/s [2024-11-19T09:53:40.956Z] 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:01.761 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:01.761 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091121' 00:26:01.761 killing process with pid 1091121 00:26:01.761 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1091121 00:26:01.761 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1091121 00:26:01.761 [2024-11-19 10:53:40.726211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c4e0 is same with the state(6) to be set 00:26:01.761 [2024-11-19 10:53:40.726258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c4e0 is same with the state(6) to be set 00:26:01.761 [2024-11-19 10:53:40.726265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c4e0 is same with the state(6) to be set 00:26:01.761 [2024-11-19 10:53:40.726270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c4e0 is same with the state(6) to be set 00:26:01.761 [2024-11-19 10:53:40.726275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c4e0 is same with the state(6) to be set 00:26:01.761 [2024-11-19 
10:53:40.726280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c4e0 is same with the state(6) to be set 00:26:01.762 [2024-11-19 10:53:40.726697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.762 [2024-11-19 10:53:40.726735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.726746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.762 [2024-11-19 10:53:40.726754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.726763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.762 [2024-11-19 10:53:40.726771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.726779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.762 [2024-11-19 10:53:40.726792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.726800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1641cb0 is same with the state(6) to be set 00:26:01.762 [2024-11-19 10:53:40.726904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.726915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.726930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.726937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.726949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.726957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.726966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.726974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.726983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.726991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.762 [2024-11-19 10:53:40.727265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.762 [2024-11-19 10:53:40.727272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.763 [2024-11-19 10:53:40.727740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.763 [2024-11-19 10:53:40.727748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:01.763 [2024-11-19 10:53:40.727758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.763 [2024-11-19 10:53:40.727766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.763 [2024-11-19 10:53:40.727775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.763 [2024-11-19 10:53:40.727782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.763 [2024-11-19 10:53:40.727792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.763 [2024-11-19 10:53:40.727799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.763 [2024-11-19 10:53:40.727808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.763 [2024-11-19 10:53:40.727815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.763 [2024-11-19 10:53:40.727813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.763 [2024-11-19 10:53:40.727826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.763 [2024-11-19 10:53:40.727827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.763 [2024-11-19 10:53:40.727835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.763 [2024-11-19 10:53:40.727836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.763 [2024-11-19 10:53:40.727841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.763 [2024-11-19 10:53:40.727846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.763 [2024-11-19 10:53:40.727847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.763 [2024-11-19 10:53:40.727855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.763 [2024-11-19 10:53:40.727856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.763 [2024-11-19 10:53:40.727860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.763 [2024-11-19 10:53:40.727866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.763 [2024-11-19 10:53:40.727866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.763 [2024-11-19 10:53:40.727871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.763 [2024-11-19 10:53:40.727874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.764 [2024-11-19 10:53:40.727876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.764 [2024-11-19 10:53:40.727886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.764 [2024-11-19 10:53:40.727899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.764 [2024-11-19 10:53:40.727905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.764 [2024-11-19 10:53:40.727917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.764 [2024-11-19 10:53:40.727927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.764 [2024-11-19 10:53:40.727933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.764 [2024-11-19 10:53:40.727947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.764 [2024-11-19 10:53:40.727954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.764 [2024-11-19 10:53:40.727967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.764 [2024-11-19 10:53:40.727979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.764 [2024-11-19 10:53:40.727989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.727994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.764 [2024-11-19 10:53:40.727996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.764 [2024-11-19 10:53:40.728008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.764 [2024-11-19 10:53:40.728014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.764 [2024-11-19 10:53:40.728026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.764 [2024-11-19 10:53:40.728038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2f9f0 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set
00:26:01.764 [2024-11-19 10:53:40.728128]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.728135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.728142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.728147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.728152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.728161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.728166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.728171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.728175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.728180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e600 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.764 [2024-11-19 10:53:40.729635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the 
state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 
10:53:40.729870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ead0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.729935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:01.765 [2024-11-19 10:53:40.729967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1641cb0 (9): Bad file descriptor 00:26:01.765 [2024-11-19 10:53:40.730902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to 
be set 00:26:01.765 [2024-11-19 10:53:40.730984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.730998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.731004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.731008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.731013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.731018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.731023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.731027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.731032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.731036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.765 [2024-11-19 10:53:40.731045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731202] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176efc0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.766 [2024-11-19 10:53:40.731458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1641cb0 with addr=10.0.0.2, port=4420 00:26:01.766 [2024-11-19 10:53:40.731470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1641cb0 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731820] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:01.766 [2024-11-19 10:53:40.731836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1641cb0 (9): Bad file descriptor 00:26:01.766 [2024-11-19 10:53:40.731855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731881] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731901] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:01.766 [2024-11-19 10:53:40.731905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 10:53:40.731983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set 00:26:01.766 [2024-11-19 
10:53:40.731988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.766 [2024-11-19 10:53:40.731993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.731997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f490 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:01.767 [2024-11-19 10:53:40.732182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:01.767 [2024-11-19 10:53:40.732191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:01.767 [2024-11-19 10:53:40.732202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:01.767 [2024-11-19 10:53:40.732579] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:01.767 [2024-11-19 10:53:40.732740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732842] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:01.767 [2024-11-19 10:53:40.732851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.732899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.767 [2024-11-19 10:53:40.736298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.767 [2024-11-19 10:53:40.736323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.767 [2024-11-19 10:53:40.736337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.767 [2024-11-19 10:53:40.736345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.767 [2024-11-19 10:53:40.736356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.767 [2024-11-19 10:53:40.736365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.767 [2024-11-19 10:53:40.736375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.767 [2024-11-19 10:53:40.736382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.767 [2024-11-19 10:53:40.736392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.767 [2024-11-19 10:53:40.736399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.767 [2024-11-19 10:53:40.736409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.767 [2024-11-19 10:53:40.736417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.767 [2024-11-19 10:53:40.736426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.767 [2024-11-19 10:53:40.736433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.767 [2024-11-19 10:53:40.736444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.767 [2024-11-19 10:53:40.736452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.767 [2024-11-19 10:53:40.736462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.767 [2024-11-19 10:53:40.736470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.767 [2024-11-19 10:53:40.736479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.767 [2024-11-19 10:53:40.736487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.736982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.736990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.737000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.737008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.737017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.737025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.737034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.737041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.737051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.737058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.737068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.737076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.737085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.737092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.737102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.737109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.737118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.737126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.737136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.737143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.768 [2024-11-19 10:53:40.737154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.768 [2024-11-19 10:53:40.737178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.769 [2024-11-19 10:53:40.737448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43510 is same with the state(6) to be set
00:26:01.769 [2024-11-19 10:53:40.737621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d070 is same with the state(6) to be set
00:26:01.769 [2024-11-19 10:53:40.737727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163e420 is same with the state(6) to be set
00:26:01.769 [2024-11-19 10:53:40.737861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abb590 is same with the state(6) to be set
00:26:01.769 [2024-11-19 10:53:40.737963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.737988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.737997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.738004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.738012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.738019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.738026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163ffc0 is same with the state(6) to be set
00:26:01.769 [2024-11-19 10:53:40.738048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.738059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.738068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.738076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.738085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.738093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.738101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.769 [2024-11-19 10:53:40.738109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.769 [2024-11-19 10:53:40.738116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16389f0 is same with the state(6) to be set
00:26:01.769 [2024-11-19 10:53:40.739978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:26:01.769 [2024-11-19 10:53:40.740002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6d070 (9): Bad file descriptor
00:26:01.769 [2024-11-19 10:53:40.740815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.769 [2024-11-19 10:53:40.740838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6d070 with addr=10.0.0.2, port=4420
00:26:01.769 [2024-11-19 10:53:40.740851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d070 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.740975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6d070 (9): Bad file descriptor
00:26:01.770 [2024-11-19 10:53:40.741201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:01.770 [2024-11-19 10:53:40.741223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:26:01.770 [2024-11-19 10:53:40.741231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:26:01.770 [2024-11-19 10:53:40.741239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:01.770 [2024-11-19 10:53:40.741246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:26:01.770 [2024-11-19 10:53:40.741615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.770 [2024-11-19 10:53:40.741630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1641cb0 with addr=10.0.0.2, port=4420
00:26:01.770 [2024-11-19 10:53:40.741638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1641cb0 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.741700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1641cb0 (9): Bad file descriptor
00:26:01.770 [2024-11-19 10:53:40.741764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:01.770 [2024-11-19 10:53:40.741772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:01.770 [2024-11-19 10:53:40.741780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:01.770 [2024-11-19 10:53:40.741787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:01.770 [2024-11-19 10:53:40.747369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f960 is same with the state(6) to be set
00:26:01.770 [2024-11-19 10:53:40.747638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163e420 (9): Bad file descriptor
00:26:01.770 [2024-11-19 10:53:40.747689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abb590 (9): Bad file descriptor
00:26:01.770 [2024-11-19 10:53:40.747708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163ffc0 (9): Bad file descriptor
00:26:01.770 [2024-11-19 10:53:40.747724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16389f0 (9): Bad file descriptor
00:26:01.770 [2024-11-19 10:53:40.747816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.747827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.770 [2024-11-19 10:53:40.747839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.747850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.770 [2024-11-19 10:53:40.747860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.747868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.770 [2024-11-19 10:53:40.747877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.747885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.770 [2024-11-19 10:53:40.747894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.747902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.770 [2024-11-19 10:53:40.747912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.747919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.770 [2024-11-19 10:53:40.747929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.747937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.770 [2024-11-19 10:53:40.747946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.747954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.770 [2024-11-19 10:53:40.747964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.747971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.770 [2024-11-19 10:53:40.747981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.747989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.770 [2024-11-19 10:53:40.747999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.770 [2024-11-19 10:53:40.748006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.748480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.748488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.771 [2024-11-19 10:53:40.757404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.771 [2024-11-19 10:53:40.757412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.772 [2024-11-19 10:53:40.757423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a449d0 is same with the state(6) to be set
00:26:01.772 [2024-11-19 10:53:40.758742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:26:01.772 [2024-11-19 10:53:40.758808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1559610 (9): Bad file descriptor
00:26:01.772 [2024-11-19 10:53:40.758850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.772 [2024-11-19 10:53:40.758866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.772 [2024-11-19 10:53:40.758877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.772 [2024-11-19 10:53:40.758887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.758898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.772 [2024-11-19 10:53:40.758907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.758918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.772 [2024-11-19 10:53:40.758926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.758936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9f5d0 is same with the state(6) to be set 00:26:01.772 [2024-11-19 10:53:40.758963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.772 [2024-11-19 10:53:40.758973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.758982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.772 [2024-11-19 10:53:40.758989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.758998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.772 [2024-11-19 10:53:40.759006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.772 [2024-11-19 10:53:40.759024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a93bb0 is same with the state(6) to be set 00:26:01.772 [2024-11-19 10:53:40.759059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.772 [2024-11-19 10:53:40.759069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.772 [2024-11-19 10:53:40.759085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.772 [2024-11-19 10:53:40.759102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759111] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.772 [2024-11-19 10:53:40.759119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a626f0 is same with the state(6) to be set 00:26:01.772 [2024-11-19 10:53:40.759208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:26:01.772 [2024-11-19 10:53:40.759222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:01.772 [2024-11-19 10:53:40.759293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:01.772 [2024-11-19 10:53:40.759634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.772 [2024-11-19 10:53:40.759724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.772 [2024-11-19 10:53:40.759731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 
10:53:40.759814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.759988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.759998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.773 [2024-11-19 10:53:40.760441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.773 [2024-11-19 10:53:40.760451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.760459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.760468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1846e80 is same with the state(6) to be set 00:26:01.774 [2024-11-19 10:53:40.761752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.761987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.761997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.774 [2024-11-19 10:53:40.762435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.774 [2024-11-19 10:53:40.762443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.775 [2024-11-19 10:53:40.762756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.775 [2024-11-19 10:53:40.762763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.775 [2024-11-19 10:53:40.762773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.775 [2024-11-19 10:53:40.762781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same READ / ABORTED - SQ DELETION pair repeated for cid:55..63, lba:31616..32640 (step 128), len:128, timestamps 10:53:40.762791-10:53:40.762941 ...]
00:26:01.775 [2024-11-19 10:53:40.762950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a420a0 is same with the state(6) to be set
00:26:01.775 [2024-11-19 10:53:40.764502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.775 [2024-11-19 10:53:40.764517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same READ / ABORTED - SQ DELETION pair repeated for cid:1..63, lba:24704..32640 (step 128), len:128, timestamps 10:53:40.764530-10:53:40.765664 ...]
00:26:01.777 [2024-11-19 10:53:40.765673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18823b0 is same with the state(6) to be set
00:26:01.777 [2024-11-19 10:53:40.767212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:26:01.777 [2024-11-19 10:53:40.767236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:26:01.777 [2024-11-19 10:53:40.767246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:26:01.777 [2024-11-19 10:53:40.767623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.777 [2024-11-19 10:53:40.767663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559610 with addr=10.0.0.2, port=4420
00:26:01.777 [2024-11-19 10:53:40.767675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1559610 is same with the state(6) to be set
00:26:01.777 [2024-11-19 10:53:40.768005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.777 [2024-11-19 10:53:40.768020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6d070 with addr=10.0.0.2, port=4420
00:26:01.777 [2024-11-19 10:53:40.768028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d070 is same with the state(6) to be set
00:26:01.777 [2024-11-19 10:53:40.768428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.777 [2024-11-19 10:53:40.768468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1641cb0 with addr=10.0.0.2, port=4420
00:26:01.777 [2024-11-19 10:53:40.768480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1641cb0 is same with the state(6) to be set
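The errno = 111 in the three connect() failures above is ECONNREFUSED on Linux: the NVMe/TCP listener at 10.0.0.2:4420 is no longer accepting connections while these qpairs try to reconnect, and SPDK's posix socket module (posix.c:1054) reports the raw errno from its own connect() call. A minimal plain-POSIX sketch of the same failure, independent of SPDK (address and port copied from the log; everything else is illustrative):

/* Minimal sketch (not SPDK code): reproduces the errno = 111 that
 * posix.c:posix_sock_create logs above. On a reachable host where
 * nothing listens on port 4420, connect() fails with ECONNREFUSED. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* On the test rig this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}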
00:26:01.777 [2024-11-19 10:53:40.768542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.777 [2024-11-19 10:53:40.768555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same READ / ABORTED - SQ DELETION pair repeated for cid:1..63, lba:24704..32640 (step 128), len:128, timestamps 10:53:40.768577-10:53:40.769728 ...]
00:26:01.779 [2024-11-19 10:53:40.769736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1845b60 is same with the state(6) to be set
00:26:01.779 task offset: 24576 on job bdev=Nvme1n1 fails
00:26:01.779
00:26:01.779 Latency(us)
00:26:01.779 [2024-11-19T09:53:40.974Z] Device Information : runtime(s)  IOPS    MiB/s  Fail/s  TO/s  Average    min       max
00:26:01.779 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.779 Job: Nvme1n1 ended in about 1.04 seconds with error
00:26:01.779 Verification LBA range: start 0x0 length 0x400
00:26:01.779 Nvme1n1  : 1.04  184.28  11.52  61.43  0.00  257778.93   2471.25  272629.76
00:26:01.779 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.779 Job: Nvme2n1 ended in about 1.08 seconds with error
00:26:01.779 Verification LBA range: start 0x0 length 0x400
00:26:01.779 Nvme2n1  : 1.08  177.25  11.08  59.08  0.00  263296.43  23374.51  232434.35
00:26:01.779 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.779 Job: Nvme3n1 ended in about 1.07 seconds with error
00:26:01.779 Verification LBA range: start 0x0 length 0x400
00:26:01.779 Nvme3n1  : 1.07  182.51  11.41  59.59  0.00  252107.75  16384.00  248162.99
00:26:01.779 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.779 Job: Nvme4n1 ended in about 1.08 seconds with error
00:26:01.779 Verification LBA range: start 0x0 length 0x400
00:26:01.779 Nvme4n1  : 1.08  178.37  11.15  59.46  0.00  251886.29  20534.61  225443.84
00:26:01.779 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.779 Job: Nvme5n1 ended in about 1.05 seconds with error
00:26:01.779 Verification LBA range: start 0x0 length 0x400
00:26:01.779 Nvme5n1  : 1.05  182.56  11.41  60.85  0.00  240841.44   6062.08  276125.01
00:26:01.779 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.779 Job: Nvme6n1 ended in about 1.07 seconds with error
00:26:01.779 Verification LBA range: start 0x0 length 0x400
00:26:01.779 Nvme6n1  : 1.07  192.35  12.02  46.69  0.00  239973.12  37355.52  248162.99
00:26:01.779 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.779 Verification LBA range: start 0x0 length 0x400
00:26:01.779 Nvme7n1  : 1.06  242.03  15.13   0.00  0.00  232674.13  39321.60  248162.99
00:26:01.779 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.779 Verification LBA range: start 0x0 length 0x400
00:26:01.779 Nvme8n1  : 1.05  244.36  15.27   0.00  0.00  225108.48  19660.80  237677.23
00:26:01.779 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.779 Verification LBA range: start 0x0 length 0x400
00:26:01.779 Nvme9n1  : 1.05  183.11  11.44   0.00  0.00  294009.17  18786.99  274377.39
00:26:01.779 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.779 Job: Nvme10n1 ended in about 1.08 seconds with error
00:26:01.779 Verification LBA range: start 0x0 length 0x400
00:26:01.779 Nvme10n1 : 1.08  177.92  11.12  59.31  0.00  223623.47  19223.89  225443.84
00:26:01.779 [2024-11-19T09:53:40.974Z] ===================================================================================================================
00:26:01.779 [2024-11-19T09:53:40.974Z] Total    :       1944.73 121.55 406.41  0.00  246961.78   2471.25  276125.01
00:26:01.779 [2024-11-19 10:53:40.799128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:01.779 [2024-11-19 10:53:40.799172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:26:01.779 [2024-11-19 10:53:40.799584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.779 [2024-11-19 10:53:40.799603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163ffc0 with addr=10.0.0.2, port=4420
00:26:01.779 [2024-11-19 10:53:40.799614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163ffc0 is same with the state(6) to be set
00:26:01.779 [2024-11-19 10:53:40.799939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.779 [2024-11-19 10:53:40.799951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163e420 with addr=10.0.0.2, port=4420
00:26:01.779 [2024-11-19 10:53:40.799959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163e420 is same with the state(6) to be set
00:26:01.779 [2024-11-19 10:53:40.800226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.779 [2024-11-19 10:53:40.800239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abb590 with addr=10.0.0.2, port=4420
00:26:01.779 [2024-11-19 10:53:40.800247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abb590 is same with the state(6) to be set
00:26:01.779 [2024-11-19 10:53:40.800261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1559610 (9): Bad file descriptor
00:26:01.779 [2024-11-19 10:53:40.800274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6d070 (9): Bad file descriptor
00:26:01.779 [2024-11-19 10:53:40.800284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1641cb0 (9): Bad file descriptor
00:26:01.779 [2024-11-19 10:53:40.800303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9f5d0 (9): Bad file descriptor
00:26:01.779 [2024-11-19 10:53:40.800327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93bb0 (9): Bad file descriptor
00:26:01.779 [2024-11-19 10:53:40.800347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a626f0 (9): Bad file descriptor
00:26:01.779 [2024-11-19 10:53:40.800372] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:26:01.779 [2024-11-19 10:53:40.800385] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:26:01.779 [2024-11-19 10:53:40.800395] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:26:01.779 [2024-11-19 10:53:40.801606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.779 [2024-11-19 10:53:40.801625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16389f0 with addr=10.0.0.2, port=4420
00:26:01.779 [2024-11-19 10:53:40.801633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16389f0 is same with the state(6) to be set
00:26:01.779 [2024-11-19 10:53:40.801643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163ffc0 (9): Bad file descriptor
00:26:01.779 [2024-11-19 10:53:40.801653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163e420 (9): Bad file descriptor
00:26:01.779 [2024-11-19 10:53:40.801663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abb590 (9): Bad file descriptor
00:26:01.779 [2024-11-19 10:53:40.801672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:26:01.779 [2024-11-19 10:53:40.801680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:26:01.779 [2024-11-19 10:53:40.801689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:26:01.779 [2024-11-19 10:53:40.801697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:26:01.779 [2024-11-19 10:53:40.801706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:26:01.779 [2024-11-19 10:53:40.801713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:26:01.779 [2024-11-19 10:53:40.801720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:01.779 [2024-11-19 10:53:40.801730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:26:01.780 [2024-11-19 10:53:40.801738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.801745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.801752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.801759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
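The bdevperf summary above is internally consistent: with 64 KiB I/Os ("IO size: 65536"), MiB/s equals IOPS / 16, and both the per-device rows and the Total row obey it (184.28 / 16 = 11.52 for Nvme1n1, 1944.73 / 16 = 121.55 for the total). A tiny check program, with the values transcribed from the table:

/* Sanity-check of the summary table (illustrative only):
 * MiB/s = IOPS * io_size / 2^20 for the 64 KiB verify workload. */
#include <stdio.h>

int main(void)
{
    const double io_size_bytes = 65536.0;  /* "IO size: 65536" from the job lines */
    const double iops_nvme1 = 184.28, iops_total = 1944.73;

    printf("Nvme1n1: %.2f MiB/s\n", iops_nvme1 * io_size_bytes / (1024 * 1024));
    printf("Total:   %.2f MiB/s\n", iops_total * io_size_bytes / (1024 * 1024));
    return 0;  /* prints 11.52 and 121.55, matching the MiB/s column */
}

Fail/s is the rate of aborted verify I/Os, which is why only Nvme7n1, Nvme8n1 and Nvme9n1, the three jobs without an "ended ... with error" line, show 0.00 in that column.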
00:26:01.780 [2024-11-19 10:53:40.801782] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:26:01.780 [2024-11-19 10:53:40.801793] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:26:01.780 [2024-11-19 10:53:40.801804] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:26:01.780 [2024-11-19 10:53:40.802181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16389f0 (9): Bad file descriptor
00:26:01.780 [2024-11-19 10:53:40.802195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.802203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.802211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.802217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:26:01.780 [2024-11-19 10:53:40.802225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.802232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.802240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.802246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:26:01.780 [2024-11-19 10:53:40.802253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.802260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.802268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.802275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
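Each failed reset above follows the same four-record pattern: nvme_ctrlr_process_init finds the controller in an error state, spdk_nvme_ctrlr_reconnect_poll_async gives up, nvme_ctrlr_fail marks the controller failed, and bdev_nvme_reset_ctrlr_complete reports "Resetting controller failed." A simplified, blocking sketch of that retry-until-giving-up shape; SPDK's real logic in bdev_nvme.c is event-driven and governed by options such as reconnect_delay_sec and ctrlr_loss_timeout_sec, so the retry count and delay here are invented for illustration:

/* Simplified sketch (not SPDK code) of the reconnect loop whose
 * exhaustion the log records as "Resetting controller failed." */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };

    inet_pton(AF_INET, ip, &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
        return fd;                    /* controller transport reachable again */
    int saved = errno;
    close(fd);                        /* don't let close() clobber errno */
    errno = saved;
    return -1;
}

int main(void)
{
    for (int attempt = 1; attempt <= 5; attempt++) {  /* bounded retries (invented) */
        int fd = try_connect("10.0.0.2", 4420);       /* target from the log */
        if (fd >= 0) {
            printf("reconnected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        fprintf(stderr, "attempt %d: errno = %d (%s)\n", attempt, errno, strerror(errno));
        sleep(1);                     /* fixed 1 s delay between attempts (invented) */
    }
    fprintf(stderr, "resetting controller failed, giving up\n");
    return 1;
}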
00:26:01.780 [2024-11-19 10:53:40.802337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:26:01.780 [2024-11-19 10:53:40.802349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:26:01.780 [2024-11-19 10:53:40.802358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:01.780 [2024-11-19 10:53:40.802367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:01.780 [2024-11-19 10:53:40.802377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:26:01.780 [2024-11-19 10:53:40.802385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:26:01.780 [2024-11-19 10:53:40.802431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.802443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.802450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.802457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:26:01.780 [2024-11-19 10:53:40.802799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.780 [2024-11-19 10:53:40.802813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9f5d0 with addr=10.0.0.2, port=4420
00:26:01.780 [2024-11-19 10:53:40.802821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9f5d0 is same with the state(6) to be set
00:26:01.780 [2024-11-19 10:53:40.803108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.780 [2024-11-19 10:53:40.803119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a93bb0 with addr=10.0.0.2, port=4420
00:26:01.780 [2024-11-19 10:53:40.803126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a93bb0 is same with the state(6) to be set
00:26:01.780 [2024-11-19 10:53:40.803354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.780 [2024-11-19 10:53:40.803365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a626f0 with addr=10.0.0.2, port=4420
00:26:01.780 [2024-11-19 10:53:40.803373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a626f0 is same with the state(6) to be set
00:26:01.780 [2024-11-19 10:53:40.803710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.780 [2024-11-19 10:53:40.803722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1641cb0 with addr=10.0.0.2, port=4420
00:26:01.780 [2024-11-19 10:53:40.803729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1641cb0 is same with the state(6) to be set
00:26:01.780 [2024-11-19 10:53:40.804080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.780 [2024-11-19 10:53:40.804091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6d070 with addr=10.0.0.2, port=4420
00:26:01.780 [2024-11-19 10:53:40.804098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d070 is same with the state(6) to be set
00:26:01.780 [2024-11-19 10:53:40.804424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.780 [2024-11-19 10:53:40.804436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1559610 with addr=10.0.0.2, port=4420
00:26:01.780 [2024-11-19 10:53:40.804443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1559610 is same with the state(6) to be set
00:26:01.780 [2024-11-19 10:53:40.804474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9f5d0 (9): Bad file descriptor
00:26:01.780 [2024-11-19 10:53:40.804484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93bb0 (9): Bad file descriptor
00:26:01.780 [2024-11-19 10:53:40.804494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a626f0 (9): Bad file descriptor
00:26:01.780 [2024-11-19 10:53:40.804503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1641cb0 (9): Bad file descriptor
00:26:01.780 [2024-11-19 10:53:40.804513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6d070 (9): Bad file descriptor
00:26:01.780 [2024-11-19 10:53:40.804522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1559610 (9): Bad file descriptor
00:26:01.780 [2024-11-19 10:53:40.804551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.804559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.804566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.804576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:26:01.780 [2024-11-19 10:53:40.804584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.804590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.804597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.804603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:26:01.780 [2024-11-19 10:53:40.804611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.804617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.804624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.804631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:26:01.780 [2024-11-19 10:53:40.804639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.804646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.804653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.804659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:01.780 [2024-11-19 10:53:40.804666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.804673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.804680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.804687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:26:01.780 [2024-11-19 10:53:40.804695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:26:01.780 [2024-11-19 10:53:40.804701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:26:01.780 [2024-11-19 10:53:40.804708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:26:01.780 [2024-11-19 10:53:40.804714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
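The repeated connect() failures above carry errno = 111, which on Linux is ECONNREFUSED: the target side is already gone while the bdev layer keeps retrying, so every reconnect and reset attempt ends in "Resetting controller failed." A quick way to decode such errno values (assuming python3 is available on the build host; this command is not part of the test scripts):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused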
00:26:02.041 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1091507
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1091507
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1091507
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:02.984 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:02.984 rmmod nvme_tcp
rmmod nvme_fabrics
00:26:02.984 rmmod nvme_keyring
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1091121 ']'
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1091121
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1091121 ']'
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1091121
00:26:02.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1091121) - No such process
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1091121 is not found'
00:26:02.984 Process with pid 1091121 is not found
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:02.984 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:26:02.985 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:26:02.985 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:02.985 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:26:02.985 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:02.985 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:02.985 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:02.985 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:02.985 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:05.536 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:05.537
00:26:05.537 real 0m8.266s
00:26:05.537 user 0m21.560s
00:26:05.537 sys 0m1.279s
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:05.537 ************************************
00:26:05.537 END TEST nvmf_shutdown_tc3
00:26:05.537 ************************************
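The NOT wait 1091507 trace above is the autotest negative-assertion helper: bdevperf is expected to die together with the target, so a non-zero exit status (255 here, folded to 127 for signal-range codes and finally to es=1) makes the assertion pass. A minimal sketch of that exit-status translation, assuming this simplified shape rather than the verbatim autotest_common.sh source:

  NOT() {
      local es=0
      "$@" || es=$?             # run the wrapped command and capture its status
      (( es > 128 )) && es=127  # fold signal-range exit codes, as in the trace
      (( es != 0 ))             # succeed only if the command failed
  }

Used as NOT wait "$pid", it returns 0 exactly when the waited-on process exited non-zero, which is the pass condition for this shutdown test.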
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:05.537 ************************************
00:26:05.537 START TEST nvmf_shutdown_tc4
00:26:05.537 ************************************
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:26:05.537 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:26:05.537 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:26:05.537 Found net devices under 0000:4b:00.0: cvl_0_0
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:05.537 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:26:05.538 Found net devices under 0000:4b:00.1: cvl_0_1
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:05.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:05.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms
00:26:05.538
00:26:05.538 --- 10.0.0.2 ping statistics ---
00:26:05.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:05.538 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:05.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:05.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms
00:26:05.538
00:26:05.538 --- 10.0.0.1 ping statistics ---
00:26:05.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:05.538 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1092961
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1092961
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1092961 ']'
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
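Condensed, the plumbing traced above moves the target port (cvl_0_0, 10.0.0.2) into its own network namespace while the initiator side (cvl_0_1, 10.0.0.1) stays in the host namespace, opens the NVMe/TCP port in the firewall, and proves two-way reachability before the target is launched. The same commands, gathered for readability (interface names are the ones detected on this node):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                # host namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> host namespace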
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:05.538 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:05.799 [2024-11-19 10:53:44.707547] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:26:05.799 [2024-11-19 10:53:44.707614] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:05.799 [2024-11-19 10:53:44.804010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:05.799 [2024-11-19 10:53:44.837873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:05.799 [2024-11-19 10:53:44.837907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:05.799 [2024-11-19 10:53:44.837913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:05.799 [2024-11-19 10:53:44.837922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:05.799 [2024-11-19 10:53:44.837926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:05.799 [2024-11-19 10:53:44.839262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:05.799 [2024-11-19 10:53:44.839421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:05.799 [2024-11-19 10:53:44.839571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:05.799 [2024-11-19 10:53:44.839573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:06.372 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:06.373 [2024-11-19 10:53:45.558008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:06.373 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:06.634 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:06.634 Malloc1
00:26:06.634 [2024-11-19 10:53:45.669070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:06.634 Malloc2
00:26:06.634 Malloc3
00:26:06.634 Malloc4
00:26:06.634 Malloc5
00:26:06.895 Malloc6
00:26:06.895 Malloc7
00:26:06.895 Malloc8
00:26:06.895 Malloc9
00:26:06.895 Malloc10
00:26:06.896 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:06.896 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:26:06.896 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:06.896 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:06.896 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1093201
00:26:06.896 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:26:06.896 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:26:07.156 [2024-11-19 10:53:46.148781] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1092961
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1092961 ']'
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1092961
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092961
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092961'
00:26:12.453 killing process with pid 1092961
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1092961
00:26:12.453 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1092961
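Each of the ten rpcs.txt fragments cat'd above creates one Malloc-backed subsystem before spdk_nvme_perf is pointed at them, which is where the Malloc1 through Malloc10 output comes from. A hedged sketch of the per-subsystem RPC sequence those fragments imply (the bdev size, block size, and serial strings are illustrative assumptions, not values shown in this log):

  # assumes the target's RPC socket at the default /var/tmp/spdk.sock
  for i in {1..10}; do
      rpc.py bdev_malloc_create -b Malloc$i 64 512   # 64 MiB bdev, 512 B blocks (assumed)
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done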
00:26:12.453 [2024-11-19 10:53:51.153870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117990 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.153914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117990 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.153920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117990 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.153926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117990 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.153931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117990 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.153936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117990 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.153941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117990 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.153945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117990 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.153950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117990 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117e60 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117e60 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117e60 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117e60 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2117e60 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118330 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118330 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118330 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118330 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118330 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118330 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118330 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21174c0 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21174c0 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21174c0 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.154903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21174c0 is same with the state(6) to be set
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 [2024-11-19 10:53:51.156661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116ff0 is same with Write completed with error (sct=0, sc=8)
00:26:12.453 the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.156679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116ff0 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.156685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116ff0 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.156690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116ff0 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.156695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116ff0 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.156700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116ff0 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.156705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116ff0 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.156710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116ff0 is same with the state(6) to be set
00:26:12.453 [2024-11-19 10:53:51.156708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 starting I/O failed: -6
00:26:12.453 Write completed with error (sct=0, sc=8)
00:26:12.453 [2024-11-19 10:53:51.157590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 [2024-11-19 10:53:51.158003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set
00:26:12.454 starting I/O failed: -6 [2024-11-19 10:53:51.158019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set
00:26:12.454 Write completed with error (sct=0, sc=8) [2024-11-19 10:53:51.158025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set [2024-11-19 10:53:51.158030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set [2024-11-19 10:53:51.158035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set [2024-11-19 10:53:51.158041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set [2024-11-19 10:53:51.158046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set [2024-11-19 10:53:51.158050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set [2024-11-19 10:53:51.158056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set [2024-11-19 10:53:51.158060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set [2024-11-19 10:53:51.158065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set [2024-11-19 10:53:51.158070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118cd0 is same with the state(6) to be set
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 [2024-11-19 10:53:51.158238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21191a0 is same with the state(6) to be set
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 [2024-11-19 10:53:51.158255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21191a0 is same with starting I/O failed: -6
00:26:12.454 the state(6) to be set
00:26:12.454 [2024-11-19 10:53:51.158262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21191a0 is same with the state(6) to be set
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 [2024-11-19 10:53:51.158267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21191a0 is same with the state(6) to be set
00:26:12.454 starting I/O failed: -6
00:26:12.454 [2024-11-19 10:53:51.158273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21191a0 is same with the state(6) to be set
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 [2024-11-19 10:53:51.158466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119670 is same with the state(6) to be set
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 [2024-11-19 10:53:51.158479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119670 is same with the state(6) to be set
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 Write completed with error (sct=0, sc=8)
00:26:12.454 starting I/O failed: -6
00:26:12.454 [2024-11-19 10:53:51.158925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118800 is same with the state(6) to be set
00:26:12.454 [2024-11-19 10:53:51.158941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118800 is same with the state(6) to be set
00:26:12.454 Write completed with error (sct=0, sc=8) [2024-11-19 10:53:51.158947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118800 is same with the state(6) to be set
00:26:12.454 starting I/O failed: -6
00:26:12.454 [2024-11-19 10:53:51.158952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118800 is same with the state(6) to be set
00:26:12.454 [2024-11-19 10:53:51.158958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118800 is
same with the state(6) to be set 00:26:12.454 Write completed with error (sct=0, sc=8) 00:26:12.454 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 [2024-11-19 10:53:51.159973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:12.455 NVMe io qpair process completion error 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 Write completed with error (sct=0, sc=8) 00:26:12.455 starting I/O failed: -6 00:26:12.455 Write completed 
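For readers tracing the two interleaved messages back to their sources: the tcp.c recv-state errors come from the NVMe-oF target side (lib/nvmf/tcp.c), while the "CQ transport error" lines and the per-write failures come from the host-side NVMe driver. Below is a minimal sketch, assuming the hypothetical names io_done and poll_qpair (this is not the test's actual code), of the SPDK completion-polling pattern that produces the host-side lines: spdk_nvme_qpair_process_completions() returns a negative errno such as -6 (-ENXIO, "No such device or address") once the TCP connection backing the qpair is gone, and each outstanding write then completes with an aborted status (sct=0, sc=8 is NVMe Generic Command Status "Command Aborted due to SQ Deletion").

    /* Minimal sketch; io_done and poll_qpair are illustrative names,
     * not functions from this test. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Per-I/O completion callback: fires once per submitted write. */
    static void
    io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* sct=0, sc=8: Generic Command Status, Command Aborted
             * due to SQ Deletion -- the status seen throughout this log. */
            printf("Write completed with error (sct=%d, sc=%d)\n",
                   cpl->status.sct, cpl->status.sc);
        }
    }

    /* Reactor-side poller: drains completions for one I/O qpair. */
    static int32_t
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        /* max_completions = 0 means "process everything available". */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            /* -6 == -ENXIO: the transport connection backing the qpair
             * failed; this is the "CQ transport error -6" condition. */
            return rc;
        }
        return rc;
    }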
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:26:12.455 [2024-11-19 10:53:51.161276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated entries omitted ...]
00:26:12.455 [2024-11-19 10:53:51.162094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated entries omitted ...]
00:26:12.456 [2024-11-19 10:53:51.163018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated entries omitted ...]
00:26:12.456 [2024-11-19 10:53:51.164486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.456 NVMe io qpair process completion error
[... repeated entries omitted ...]
00:26:12.456 [2024-11-19 10:53:51.165827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated entries omitted ...]
[... repeated entries omitted ...]
00:26:12.457 [2024-11-19 10:53:51.166645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated entries omitted, including a burst of bare "starting I/O failed: -6" lines ...]
00:26:12.457 [2024-11-19 10:53:51.168005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated entries omitted ...]
00:26:12.458 [2024-11-19 10:53:51.170634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.458 NVMe io qpair process completion error
[... repeated entries omitted ...]
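The same teardown pattern repeats per subsystem: each of cnode1, cnode3, and cnode6 (above), then cnode10 and cnode2 (below), loses its I/O qpairs one by one, each sequence ending in "NVMe io qpair process completion error". The numeric code in "starting I/O failed: -6" is a negated POSIX errno; a stand-alone check (a hypothetical snippet, not from this test) confirms it matches the text SPDK prints:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* ENXIO is 6 on Linux; SPDK logs the negated value (-6). */
        printf("-%d (%s)\n", ENXIO, strerror(ENXIO));
        /* prints: -6 (No such device or address) */
        return 0;
    }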
[... repeated entries omitted ...]
00:26:12.458 [2024-11-19 10:53:51.171857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated entries omitted ...]
00:26:12.458 [2024-11-19 10:53:51.172688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated entries omitted ...]
00:26:12.459 [2024-11-19 10:53:51.173653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated entries omitted ...]
00:26:12.459 [2024-11-19 10:53:51.175333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.459 NVMe io qpair process completion error
[... repeated entries omitted ...]
[... repeated entries omitted ...]
00:26:12.460 [2024-11-19 10:53:51.176901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries continue ...]
sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 [2024-11-19 10:53:51.178350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 
00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.460 starting I/O failed: -6 00:26:12.460 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 [2024-11-19 10:53:51.180878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:12.461 NVMe io qpair process 
completion error 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 [2024-11-19 10:53:51.182012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O 
failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 [2024-11-19 10:53:51.182830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 
00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 starting I/O failed: -6 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.461 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 [2024-11-19 10:53:51.183765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 
00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 
00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 [2024-11-19 10:53:51.185397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.462 NVMe io qpair process completion error 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 starting I/O failed: -6 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.462 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 starting I/O failed: -6 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 starting I/O failed: -6 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 starting I/O failed: -6 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 00:26:12.463 Write completed with error (sct=0, sc=8) 
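[editor's note] For anyone triaging this output: each "Write completed with error (sct=0, sc=8)" record carries the NVMe completion status fields, sct (Status Code Type) and sc (Status Code). With sct=0 (Generic Command Status), sc=0x8 is "Command Aborted due to SQ Deletion" in the NVMe base specification, which is consistent with qpairs being torn down while writes were still outstanding. The sketch below shows how an SPDK completion callback surfaces exactly that (sct, sc) pair; it is illustrative only, not the test's code, and the write_ctx struct and write_complete name are hypothetical.

    #include <stdbool.h>
    #include <stdio.h>

    #include "spdk/nvme.h"

    /* Hypothetical per-I/O context, only here to give the callback
     * something to mark; not part of the SPDK API. */
    struct write_ctx {
            bool done;
            bool failed;
    };

    /* Completion callback of the form taken by spdk_nvme_ns_cmd_write().
     * It prints the same (sct, sc) pair seen in the log above. */
    static void
    write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            struct write_ctx *ctx = arg;

            ctx->done = true;
            if (spdk_nvme_cpl_is_error(cpl)) {
                    ctx->failed = true;
                    /* sct=0 (generic), sc=0x8: "Command Aborted due to
                     * SQ Deletion", i.e. the qpair was deleted while
                     * this write was still queued. */
                    printf("Write completed with error (sct=%d, sc=%d)\n",
                           cpl->status.sct, cpl->status.sc);
            }
    }

In a failover test like this one, such completions are typically the expected way for queued writes to drain when a connection drops, rather than a sign of data corruption.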
[... condensed per-I/O failure records ...]
00:26:12.463 [2024-11-19 10:53:51.186625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.463 [2024-11-19 10:53:51.187615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.463 [2024-11-19 10:53:51.188554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:12.464 [2024-11-19 10:53:51.191559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:12.464 NVMe io qpair process completion error
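[editor's note] The nvme_qpair.c lines above come from the initiator's poll path: spdk_nvme_qpair_process_completions() returns a negative errno once the transport connection behind a qpair is gone, and -6 is -ENXIO, matching the "(No such device or address)" text in the log. A minimal polling sketch that reacts to that return value, assuming SPDK's public NVMe API, follows; poll_qpair and handle_qpair_failure are hypothetical names for the application's own loop and recovery policy.

    #include <errno.h>
    #include <stdio.h>

    #include "spdk/nvme.h"

    /* Hypothetical recovery hook; a real application might reconnect
     * the controller or fail I/O over to another path here. */
    static void
    handle_qpair_failure(struct spdk_nvme_qpair *qpair)
    {
            (void)qpair;
            fprintf(stderr, "qpair failed, deferring I/O until reconnect\n");
    }

    /* Drain completions on one qpair. A negative return value means the
     * qpair itself failed (e.g. -ENXIO == -6, "No such device or
     * address", the CQ transport error reported above); a non-negative
     * value is the number of completions reaped. */
    static void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
            /* 0 == no limit on completions processed per call. */
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

            if (rc == -ENXIO) {
                    handle_qpair_failure(qpair);
            } else if (rc < 0) {
                    fprintf(stderr, "completion processing failed: %d\n", rc);
            }
    }

New writes submitted against a qpair in this state fail immediately, which is what the interleaved "starting I/O failed: -6" records show.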
[... condensed per-I/O failure records ...]
00:26:12.464 [2024-11-19 10:53:51.192584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:12.464 [2024-11-19 10:53:51.193575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.465 [2024-11-19 10:53:51.194616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:12.465 [2024-11-19 10:53:51.196593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.465 NVMe io qpair process completion error
00:26:12.466 [2024-11-19 10:53:51.197688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:12.466 [2024-11-19 10:53:51.198581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.466 Write completed with error (sct=0, sc=8)
00:26:12.466 starting I/O failed: -6 00:26:12.466 [2024-11-19 10:53:51.199509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.466 Write completed with error (sct=0, sc=8) 00:26:12.466 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error 
(sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 [2024-11-19 10:53:51.201182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:12.467 NVMe io qpair process completion error 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with 
error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 [2024-11-19 10:53:51.202238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 
00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.467 starting I/O failed: -6 00:26:12.467 [2024-11-19 10:53:51.203047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.467 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed 
with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 [2024-11-19 10:53:51.203998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 
starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 starting I/O failed: -6 00:26:12.468 Write completed with error (sct=0, sc=8) 00:26:12.468 
starting I/O failed: -6 00:26:12.468 [2024-11-19 10:53:51.207560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:12.468 NVMe io qpair process completion error 00:26:12.468 Initializing NVMe Controllers 00:26:12.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:26:12.469 Controller IO queue size 128, less than required. 00:26:12.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:26:12.469 Controller IO queue size 128, less than required. 00:26:12.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:26:12.469 Controller IO queue size 128, less than required. 00:26:12.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:26:12.469 Controller IO queue size 128, less than required. 00:26:12.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:26:12.469 Controller IO queue size 128, less than required. 00:26:12.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:26:12.469 Controller IO queue size 128, less than required. 00:26:12.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:26:12.469 Controller IO queue size 128, less than required. 00:26:12.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:26:12.469 Controller IO queue size 128, less than required. 00:26:12.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:26:12.469 Controller IO queue size 128, less than required. 00:26:12.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.469 Controller IO queue size 128, less than required. 00:26:12.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
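Every attach banner above ends with the same warning: the target grants I/O queues of only 128 entries, so a workload submitting at a deeper queue depth has requests held inside the NVMe driver rather than on the wire. A minimal sketch against SPDK's public host API of pinning the I/O queue size at connect time; the transport string reuses the address and subsystem NQN from this log, and the function itself is illustrative, not part of the test:

    #include <stddef.h>
    #include "spdk/nvme.h"

    /* Connect to one of the subsystems above with an explicit I/O queue
     * size so that submissions are not silently queued inside the driver. */
    static struct spdk_nvme_ctrlr *connect_with_queue_size(void)
    {
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr_opts opts;

        /* Address, port and subsystem NQN taken from the log above. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode3");

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
        opts.io_queue_size = 128; /* match what the target actually grants */

        return spdk_nvme_connect(&trid, &opts, sizeof(opts));
    }

In spdk_nvme_perf terms the same trade-off is exposed through the -q (queue depth) and -o (I/O size) options, which appears to be what the warning's advice refers to.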
00:26:12.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:12.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:12.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:12.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:12.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:12.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:12.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:12.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:12.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:12.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:12.469 Initialization complete. Launching workers.
00:26:12.469 ========================================================
00:26:12.469                                                                                              Latency(us)
00:26:12.469 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:26:12.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1886.57      81.06   67866.27     852.40  119855.38
00:26:12.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1871.65      80.42   68434.77     659.15  123900.76
00:26:12.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1893.50      81.36   67681.78     684.90  124350.00
00:26:12.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1886.57      81.06   67955.81     457.88  118997.09
00:26:12.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1904.64      81.84   67341.40     835.47  128266.53
00:26:12.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1818.48      78.14   70557.15     561.25  130019.57
00:26:12.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1855.89      79.75   69177.55     693.33  123282.04
00:26:12.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1875.64      80.59   68474.85     892.40  135386.38
00:26:12.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1870.39      80.37   68690.03     706.00  124686.33
00:26:12.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1840.13      79.07   69108.98     598.01  122888.56
00:26:12.469 ========================================================
00:26:12.469 Total                                                                    :   18703.46     803.66   68517.48     457.88  135386.38
00:26:12.469
00:26:12.469 [2024-11-19 10:53:51.210860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6560 is same with the state(6) to be set
00:26:12.469 [2024-11-19 10:53:51.210904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6ef0 is same with the state(6) to be set
00:26:12.469 [2024-11-19 10:53:51.210934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef8900 is same with the state(6) to be set
00:26:12.469 [2024-11-19 10:53:51.210964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef8ae0 is same with the state(6) to be set
00:26:12.469 [2024-11-19 10:53:51.210992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6890 is same with the state(6) to be set
00:26:12.469 [2024-11-19 10:53:51.211023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7740 is same with the state(6) to be set
00:26:12.469 [2024-11-19 10:53:51.211051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7410 is same with the state(6) to be set
00:26:12.469 [2024-11-19 10:53:51.211080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6bc0 is same with the state(6) to be set
00:26:12.469 [2024-11-19 10:53:51.211109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7a70 is same with the state(6) to be set
00:26:12.469 [2024-11-19 10:53:51.211138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef8720 is same with the state(6) to be set
00:26:12.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
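The "CQ transport error -6 (No such device or address)" entries above are the host-side symptom of the target being torn down while writes were in flight: -6 is -ENXIO, reported when completion polling finds the TCP connection gone, after which every outstanding request is failed back to its callback (the "Write completed with error" storm). A minimal sketch, assuming an already-connected qpair, of where that error surfaces through SPDK's public API:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Poll an I/O qpair once. spdk_nvme_qpair_process_completions()
     * returns the number of completions it reaped, or a negative errno
     * on a transport-level failure; -ENXIO (-6) is exactly the
     * "CQ transport error -6" seen in the log above. */
    static void poll_once(struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

        if (rc < 0) {
            /* Connection lost: queued I/O completes with an abort status. */
            fprintf(stderr, "qpair poll failed: %d\n", (int)rc);
        }
    }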
00:26:12.469 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1093201
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1093201
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1093201
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:13.411 10:53:52
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:13.411 rmmod nvme_tcp 00:26:13.411 rmmod nvme_fabrics 00:26:13.411 rmmod nvme_keyring 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1092961 ']' 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1092961 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1092961 ']' 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1092961 00:26:13.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1092961) - No such process 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1092961 is not found' 00:26:13.411 Process with pid 1092961 is not found 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.411 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.960 10:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:15.960 00:26:15.960 real 0m10.303s 00:26:15.960 user 0m28.058s 00:26:15.960 sys 0m4.016s 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:15.960 ************************************ 00:26:15.960 END TEST nvmf_shutdown_tc4 00:26:15.960 ************************************ 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:26:15.960 00:26:15.960 real 0m43.310s 00:26:15.960 user 1m45.367s 00:26:15.960 sys 0m13.838s 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:15.960 ************************************ 00:26:15.960 END TEST nvmf_shutdown 00:26:15.960 ************************************ 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:15.960 ************************************ 00:26:15.960 START TEST nvmf_nsid 00:26:15.960 ************************************ 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:15.960 * Looking for test storage... 
00:26:15.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.960 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:15.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.961 --rc genhtml_branch_coverage=1 00:26:15.961 --rc genhtml_function_coverage=1 00:26:15.961 --rc genhtml_legend=1 00:26:15.961 --rc geninfo_all_blocks=1 00:26:15.961 --rc geninfo_unexecuted_blocks=1 00:26:15.961 00:26:15.961 ' 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:15.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.961 --rc genhtml_branch_coverage=1 00:26:15.961 --rc genhtml_function_coverage=1 00:26:15.961 --rc genhtml_legend=1 00:26:15.961 --rc geninfo_all_blocks=1 00:26:15.961 --rc geninfo_unexecuted_blocks=1 00:26:15.961 00:26:15.961 ' 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:15.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.961 --rc genhtml_branch_coverage=1 00:26:15.961 --rc genhtml_function_coverage=1 00:26:15.961 --rc genhtml_legend=1 00:26:15.961 --rc geninfo_all_blocks=1 00:26:15.961 --rc geninfo_unexecuted_blocks=1 00:26:15.961 00:26:15.961 ' 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:15.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.961 --rc genhtml_branch_coverage=1 00:26:15.961 --rc genhtml_function_coverage=1 00:26:15.961 --rc genhtml_legend=1 00:26:15.961 --rc geninfo_all_blocks=1 00:26:15.961 --rc geninfo_unexecuted_blocks=1 00:26:15.961 00:26:15.961 ' 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]]
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=[... long PATH value with repeated /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin entries omitted ...]
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=[... same PATH value, re-prefixed, omitted ...]
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=[... same PATH value, re-prefixed, omitted ...]
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo [... resulting PATH value omitted ...]
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:15.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid --
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.961 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.962 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.962 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.962 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.962 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.962 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.962 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.962 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.962 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.962 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:24.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:24.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
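Note on the device-discovery step traced above: gather_supported_nvmf_pci_devs builds allow-lists of PCI vendor:device IDs (Intel E810 0x8086:0x159b here, plus x722 and Mellanox IDs), matches them against the bus, then resolves each function's netdev name through sysfs. A minimal standalone sketch of that lookup, assuming only the standard sysfs layout shown in the trace (the loop body and echo format are illustrative, not the suite's code):

    #!/usr/bin/env bash
    # Walk every PCI function and report Intel E810 (0x8086:0x159b) NICs
    # together with their kernel netdev names, as the trace above does.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue   # function may have no bound netdev
            echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
        done
    done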
00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:24.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:24.204 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.204 10:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:24.204 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:24.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:26:24.205 00:26:24.205 --- 10.0.0.2 ping statistics --- 00:26:24.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.205 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:26:24.205 00:26:24.205 --- 10.0.0.1 ping statistics --- 00:26:24.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.205 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1098788 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1098788 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1098788 ']' 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.205 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:24.205 [2024-11-19 10:54:02.517751] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
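Note on the nvmf_tcp_init sequence traced above: with two ports on one host, the suite isolates one of them (cvl_0_0) in a private network namespace, addresses both ends on 10.0.0.0/24, opens TCP/4420 in iptables, and ping-checks both directions before the target starts, so initiator and target traffic really crosses the wire. A condensed sketch of the same steps, with interface names and addresses taken directly from the trace (run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side, namespaced
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, host ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1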
00:26:24.205 [2024-11-19 10:54:02.517817] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.205 [2024-11-19 10:54:02.617329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.205 [2024-11-19 10:54:02.668581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.205 [2024-11-19 10:54:02.668632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.205 [2024-11-19 10:54:02.668641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.205 [2024-11-19 10:54:02.668648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.205 [2024-11-19 10:54:02.668654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.205 [2024-11-19 10:54:02.669421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1098834 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3a50a080-46f3-4583-b009-d5e48936b84e 00:26:24.205 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=fb8350ef-08e2-4816-bf54-856a4467e562 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=fcb8685c-6c00-43a3-9375-1ecc16eaf9c7 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:24.467 null0 00:26:24.467 null1 00:26:24.467 null2 00:26:24.467 [2024-11-19 10:54:03.440438] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:26:24.467 [2024-11-19 10:54:03.440507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1098834 ] 00:26:24.467 [2024-11-19 10:54:03.443680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.467 [2024-11-19 10:54:03.467914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1098834 /var/tmp/tgt2.sock 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1098834 ']' 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:26:24.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
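Note on the "Waiting for process to start up and listen on ..." message above: waitforlisten polls until the target's JSON-RPC server answers on its UNIX socket (here /var/tmp/tgt2.sock) before any rpc.py command is issued, bailing out early if the process dies. A simplified sketch of that wait loop; the rpc.py location and the 0.5 s retry interval are assumptions, the max_retries=100 bound and the rpc_get_methods probe mirror the trace:

    #!/usr/bin/env bash
    # usage: waitforlisten_sketch PID [RPC_SOCK]
    pid=$1 sock=${2:-/var/tmp/tgt2.sock}
    rpc=./scripts/rpc.py            # path assumption; the trace uses the full checkout path
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited" >&2; exit 1; }
        "$rpc" -t 1 -s "$sock" rpc_get_methods &>/dev/null && exit 0
        sleep 0.5                   # retry interval is an assumption, not from the trace
    done
    echo "timed out waiting for $sock" >&2
    exit 1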
00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.467 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:24.467 [2024-11-19 10:54:03.533059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.467 [2024-11-19 10:54:03.586472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.729 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.729 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:24.729 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:24.990 [2024-11-19 10:54:04.152456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.990 [2024-11-19 10:54:04.168635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:26:25.253 nvme0n1 nvme0n2 00:26:25.253 nvme1n1 00:26:25.253 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:25.253 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:25.253 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:26:26.641 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:27.583 10:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3a50a080-46f3-4583-b009-d5e48936b84e 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3a50a08046f34583b009d5e48936b84e 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3A50A08046F34583B009D5E48936B84E 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3A50A08046F34583B009D5E48936B84E == \3\A\5\0\A\0\8\0\4\6\F\3\4\5\8\3\B\0\0\9\D\5\E\4\8\9\3\6\B\8\4\E ]] 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:26:27.583 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:27.584 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:26:27.584 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:27.584 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid fb8350ef-08e2-4816-bf54-856a4467e562 00:26:27.584 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fb8350ef08e24816bf54856a4467e562 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FB8350EF08E24816BF54856A4467E562 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ FB8350EF08E24816BF54856A4467E562 == \F\B\8\3\5\0\E\F\0\8\E\2\4\8\1\6\B\F\5\4\8\5\6\A\4\4\6\7\E\5\6\2 ]] 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:26:27.844 10:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid fcb8685c-6c00-43a3-9375-1ecc16eaf9c7 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fcb8685c6c0043a393751ecc16eaf9c7 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FCB8685C6C0043A393751ECC16EAF9C7 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ FCB8685C6C0043A393751ECC16EAF9C7 == \F\C\B\8\6\8\5\C\6\C\0\0\4\3\A\3\9\3\7\5\1\E\C\C\1\6\E\A\F\9\C\7 ]] 00:26:27.844 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1098834 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1098834 ']' 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1098834 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1098834 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1098834' 00:26:28.105 killing process with pid 1098834 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1098834 00:26:28.105 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1098834 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:28.366 rmmod nvme_tcp 00:26:28.366 rmmod nvme_fabrics 00:26:28.366 rmmod nvme_keyring 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1098788 ']' 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1098788 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1098788 ']' 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1098788 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1098788 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1098788' 00:26:28.366 killing process with pid 1098788 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1098788 00:26:28.366 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1098788 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.627 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.539 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:30.539 00:26:30.539 real 0m14.999s 00:26:30.539 user 
0m11.408s 00:26:30.539 sys 0m6.926s 00:26:30.539 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.539 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:30.539 ************************************ 00:26:30.539 END TEST nvmf_nsid 00:26:30.539 ************************************ 00:26:30.800 10:54:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:30.800 00:26:30.800 real 13m8.470s 00:26:30.800 user 27m34.208s 00:26:30.800 sys 3m57.243s 00:26:30.800 10:54:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.800 10:54:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:30.800 ************************************ 00:26:30.800 END TEST nvmf_target_extra 00:26:30.800 ************************************ 00:26:30.800 10:54:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:30.800 10:54:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:30.800 10:54:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:30.800 10:54:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:30.800 ************************************ 00:26:30.800 START TEST nvmf_host 00:26:30.801 ************************************ 00:26:30.801 10:54:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:30.801 * Looking for test storage... 00:26:30.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:30.801 10:54:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:30.801 10:54:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:30.801 10:54:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:31.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.063 --rc genhtml_branch_coverage=1 00:26:31.063 --rc genhtml_function_coverage=1 00:26:31.063 --rc genhtml_legend=1 00:26:31.063 --rc geninfo_all_blocks=1 00:26:31.063 --rc geninfo_unexecuted_blocks=1 00:26:31.063 00:26:31.063 ' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:31.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.063 --rc genhtml_branch_coverage=1 00:26:31.063 --rc genhtml_function_coverage=1 00:26:31.063 --rc genhtml_legend=1 00:26:31.063 --rc geninfo_all_blocks=1 00:26:31.063 --rc geninfo_unexecuted_blocks=1 00:26:31.063 00:26:31.063 ' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:31.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.063 --rc genhtml_branch_coverage=1 00:26:31.063 --rc genhtml_function_coverage=1 00:26:31.063 --rc genhtml_legend=1 00:26:31.063 --rc geninfo_all_blocks=1 00:26:31.063 --rc geninfo_unexecuted_blocks=1 00:26:31.063 00:26:31.063 ' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:31.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.063 --rc genhtml_branch_coverage=1 00:26:31.063 --rc genhtml_function_coverage=1 00:26:31.063 --rc genhtml_legend=1 00:26:31.063 --rc geninfo_all_blocks=1 00:26:31.063 --rc geninfo_unexecuted_blocks=1 00:26:31.063 00:26:31.063 ' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
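Note on the cmp_versions trace above: this is how the suite decides whether the installed lcov predates 2.x. Both version strings are split on '.', '-' and ':' and compared field by field, with missing fields treated as zero. A compact sketch of the same "<" comparison, assuming purely numeric fields (the suite's version additionally handles non-numeric components):

    version_lt() {   # version_lt 1.15 2  -> returns 0 (true) because 1.15 < 2
        local IFS='.-:' v
        local -a a=($1) b=($2)
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1     # versions are equal
    }
    version_lt 1.15 2 && echo "1.15 < 2"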
00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:31.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:31.063 10:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:31.064 10:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:31.064 10:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.064 ************************************ 00:26:31.064 START TEST nvmf_multicontroller 00:26:31.064 ************************************ 00:26:31.064 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:31.064 * Looking for test storage... 
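Note on the recurring "line 33: [: : integer expression expected" message in the traces above: it comes from test being handed an empty operand, '[' '' -eq 1 ']', because the variable guarding that branch is unset when common.sh is sourced; -eq requires integers on both sides, so the test prints the error and evaluates false each time. A hedged sketch of the usual guard (the variable name is illustrative, not the suite's):

    # Fails with "integer expression expected" whenever $flag is empty:
    #   [ "$flag" -eq 1 ] && enable_feature
    # Defaulting the expansion keeps the test well-formed:
    if [ "${flag:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi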
00:26:31.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:31.064 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:31.064 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:26:31.064 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:31.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.324 --rc genhtml_branch_coverage=1 00:26:31.324 --rc genhtml_function_coverage=1 00:26:31.324 --rc genhtml_legend=1 00:26:31.324 --rc geninfo_all_blocks=1 00:26:31.324 --rc geninfo_unexecuted_blocks=1 00:26:31.324 00:26:31.324 ' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:31.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.324 --rc genhtml_branch_coverage=1 00:26:31.324 --rc genhtml_function_coverage=1 00:26:31.324 --rc genhtml_legend=1 00:26:31.324 --rc geninfo_all_blocks=1 00:26:31.324 --rc geninfo_unexecuted_blocks=1 00:26:31.324 00:26:31.324 ' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:31.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.324 --rc genhtml_branch_coverage=1 00:26:31.324 --rc genhtml_function_coverage=1 00:26:31.324 --rc genhtml_legend=1 00:26:31.324 --rc geninfo_all_blocks=1 00:26:31.324 --rc geninfo_unexecuted_blocks=1 00:26:31.324 00:26:31.324 ' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:31.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.324 --rc genhtml_branch_coverage=1 00:26:31.324 --rc genhtml_function_coverage=1 00:26:31.324 --rc genhtml_legend=1 00:26:31.324 --rc geninfo_all_blocks=1 00:26:31.324 --rc geninfo_unexecuted_blocks=1 00:26:31.324 00:26:31.324 ' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:31.324 10:54:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:31.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:31.324 10:54:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.324 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.325 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:31.325 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.325 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.325 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.325 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:31.325 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:31.325 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:26:31.325 10:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:26:39.463 
10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:39.463 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:39.463 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.463 10:54:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:39.463 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:39.463 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:39.463 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
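
The discovery pass traced above (gather_supported_nvmf_pci_devs) builds per-family device-ID lists (e810, x722, mlx), matches them against the PCI bus, and records the kernel net devices backing each matched port; here the two E810 functions 0000:4b:00.0 and 0000:4b:00.1 expose cvl_0_0 and cvl_0_1. A minimal stand-alone sketch of that loop, assuming only sysfs (the structure is hypothetical, not the SPDK helper itself):

  #!/usr/bin/env bash
  # Match PCI functions by vendor:device ID, then list the netdevs sysfs
  # exposes for each match -- the same information the trace prints above.
  intel=0x8086
  e810_ids=(0x1592 0x159b)                 # IDs the trace adds to the e810 list
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      [[ $vendor == "$intel" ]] || continue
      for id in "${e810_ids[@]}"; do
          [[ $device == "$id" ]] || continue
          echo "Found ${pci##*/} ($vendor - $device)"
          for net in "$pci"/net/*; do      # e.g. cvl_0_0, cvl_0_1 above
              [[ -e $net ]] && echo "  net device: ${net##*/}"
          done
      done
  done
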
00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:39.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:26:39.464 00:26:39.464 --- 10.0.0.2 ping statistics --- 00:26:39.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.464 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:39.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:26:39.464 00:26:39.464 --- 10.0.0.1 ping statistics --- 00:26:39.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.464 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1104401 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1104401 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1104401 ']' 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.464 10:54:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.464 [2024-11-19 10:54:17.970494] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
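
While the target initializes, the waitforlisten helper traced above blocks until the freshly started process answers on its JSON-RPC UNIX socket. A minimal sketch of that pattern (a hypothetical helper, not the autotest implementation; rpc_get_methods is used here only as a cheap liveness probe):

  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
          scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1                                      # gave up after ~10 seconds
  }
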
00:26:39.464 [2024-11-19 10:54:17.970562] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.464 [2024-11-19 10:54:18.072958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:39.464 [2024-11-19 10:54:18.125802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.464 [2024-11-19 10:54:18.125852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.464 [2024-11-19 10:54:18.125861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.464 [2024-11-19 10:54:18.125869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.464 [2024-11-19 10:54:18.125875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.464 [2024-11-19 10:54:18.127710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.464 [2024-11-19 10:54:18.127870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.464 [2024-11-19 10:54:18.127870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.725 [2024-11-19 10:54:18.851209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.725 Malloc0 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.725 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.986 [2024-11-19 10:54:18.923657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.986 [2024-11-19 10:54:18.935552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.986 Malloc1 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.986 10:54:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1104751 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1104751 /var/tmp/bdevperf.sock 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1104751 ']' 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:39.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
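
At this point the target side is fully configured: one TCP transport and two subsystems (cnode1, cnode2), each backed by a 64 MiB malloc bdev with 512-byte blocks and listening on both ports 4420 and 4421 of 10.0.0.2. A condensed replay of that rpc_cmd sequence (the wrapper path is illustrative; the RPC names and arguments are the ones traced above):

  rpc="scripts/rpc.py"                     # assumed location of the RPC client
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2; do
      $rpc bdev_malloc_create 64 512 -b "Malloc$((i - 1))"
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
      for port in 4420 4421; do
          $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s "$port"
      done
  done

bdevperf is then started with -z, so it sits idle until configured over its own RPC socket, and with -r /var/tmp/bdevperf.sock so those RPCs do not collide with the target's default /var/tmp/spdk.sock.
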
00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.986 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.926 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.926 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:40.926 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:40.926 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.926 10:54:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.186 NVMe0n1 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.186 1 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:41.186 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.187 request: 00:26:41.187 { 00:26:41.187 "name": "NVMe0", 00:26:41.187 "trtype": "tcp", 00:26:41.187 "traddr": "10.0.0.2", 00:26:41.187 "adrfam": "ipv4", 00:26:41.187 "trsvcid": "4420", 00:26:41.187 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:26:41.187 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:41.187 "hostaddr": "10.0.0.1", 00:26:41.187 "prchk_reftag": false, 00:26:41.187 "prchk_guard": false, 00:26:41.187 "hdgst": false, 00:26:41.187 "ddgst": false, 00:26:41.187 "allow_unrecognized_csi": false, 00:26:41.187 "method": "bdev_nvme_attach_controller", 00:26:41.187 "req_id": 1 00:26:41.187 } 00:26:41.187 Got JSON-RPC error response 00:26:41.187 response: 00:26:41.187 { 00:26:41.187 "code": -114, 00:26:41.187 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:41.187 } 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.187 request: 00:26:41.187 { 00:26:41.187 "name": "NVMe0", 00:26:41.187 "trtype": "tcp", 00:26:41.187 "traddr": "10.0.0.2", 00:26:41.187 "adrfam": "ipv4", 00:26:41.187 "trsvcid": "4420", 00:26:41.187 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:41.187 "hostaddr": "10.0.0.1", 00:26:41.187 "prchk_reftag": false, 00:26:41.187 "prchk_guard": false, 00:26:41.187 "hdgst": false, 00:26:41.187 "ddgst": false, 00:26:41.187 "allow_unrecognized_csi": false, 00:26:41.187 "method": "bdev_nvme_attach_controller", 00:26:41.187 "req_id": 1 00:26:41.187 } 00:26:41.187 Got JSON-RPC error response 00:26:41.187 response: 00:26:41.187 { 00:26:41.187 "code": -114, 00:26:41.187 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:41.187 } 00:26:41.187 10:54:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.187 request: 00:26:41.187 { 00:26:41.187 "name": "NVMe0", 00:26:41.187 "trtype": "tcp", 00:26:41.187 "traddr": "10.0.0.2", 00:26:41.187 "adrfam": "ipv4", 00:26:41.187 "trsvcid": "4420", 00:26:41.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:41.187 "hostaddr": "10.0.0.1", 00:26:41.187 "prchk_reftag": false, 00:26:41.187 "prchk_guard": false, 00:26:41.187 "hdgst": false, 00:26:41.187 "ddgst": false, 00:26:41.187 "multipath": "disable", 00:26:41.187 "allow_unrecognized_csi": false, 00:26:41.187 "method": "bdev_nvme_attach_controller", 00:26:41.187 "req_id": 1 00:26:41.187 } 00:26:41.187 Got JSON-RPC error response 00:26:41.187 response: 00:26:41.187 { 00:26:41.187 "code": -114, 00:26:41.187 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:26:41.187 } 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:41.187 10:54:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.187 request: 00:26:41.187 { 00:26:41.187 "name": "NVMe0", 00:26:41.187 "trtype": "tcp", 00:26:41.187 "traddr": "10.0.0.2", 00:26:41.187 "adrfam": "ipv4", 00:26:41.187 "trsvcid": "4420", 00:26:41.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:41.187 "hostaddr": "10.0.0.1", 00:26:41.187 "prchk_reftag": false, 00:26:41.187 "prchk_guard": false, 00:26:41.187 "hdgst": false, 00:26:41.187 "ddgst": false, 00:26:41.187 "multipath": "failover", 00:26:41.187 "allow_unrecognized_csi": false, 00:26:41.187 "method": "bdev_nvme_attach_controller", 00:26:41.187 "req_id": 1 00:26:41.187 } 00:26:41.187 Got JSON-RPC error response 00:26:41.187 response: 00:26:41.187 { 00:26:41.187 "code": -114, 00:26:41.187 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:41.187 } 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.187 NVMe0n1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
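
All four rejected attach calls above fail with JSON-RPC error -114: an existing controller name may be reused only to add a genuinely new network path, "-x disable" forbids adding any second path at all, and "-x failover" with an identical address and port is still a duplicate. Only the final call, which targets the second listener on port 4421, extends NVMe0. A condensed, hypothetical replay of the sequence against bdevperf's RPC socket (wrapper and flag grouping are illustrative; the flags themselves are copied from the trace):

  attach() {
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -f ipv4 -i 10.0.0.1 "$@"
  }
  attach -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2021-09-7.io.spdk:00001  # -114: same path, new hostnqn
  attach -s 4420 -n nqn.2016-06.io.spdk:cnode2                                 # -114: same path, other subsystem
  attach -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x disable                      # -114: "-x disable" forbids a second path
  attach -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover                     # -114: failover still needs a new path
  attach -s 4421 -n nqn.2016-06.io.spdk:cnode1                                 # OK: second path, still a single NVMe0n1
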
00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.187 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.447 00:26:41.447 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.447 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:41.447 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:41.447 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.447 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.447 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.447 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:41.447 10:54:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:42.826 { 00:26:42.826 "results": [ 00:26:42.826 { 00:26:42.826 "job": "NVMe0n1", 00:26:42.826 "core_mask": "0x1", 00:26:42.826 "workload": "write", 00:26:42.826 "status": "finished", 00:26:42.826 "queue_depth": 128, 00:26:42.826 "io_size": 4096, 00:26:42.826 "runtime": 1.00472, 00:26:42.826 "iops": 28919.499960187914, 00:26:42.826 "mibps": 112.96679671948404, 00:26:42.827 "io_failed": 0, 00:26:42.827 "io_timeout": 0, 00:26:42.827 "avg_latency_us": 4416.773685756241, 00:26:42.827 "min_latency_us": 2088.96, 00:26:42.827 "max_latency_us": 9885.013333333334 00:26:42.827 } 00:26:42.827 ], 00:26:42.827 "core_count": 1 00:26:42.827 } 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1104751 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1104751 ']' 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1104751 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1104751 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1104751' 00:26:42.827 killing process with pid 1104751 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1104751 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1104751 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:26:42.827 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:42.827 [2024-11-19 10:54:19.065045] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:26:42.827 [2024-11-19 10:54:19.065126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1104751 ] 00:26:42.827 [2024-11-19 10:54:19.159621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.827 [2024-11-19 10:54:19.213377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.827 [2024-11-19 10:54:20.601927] bdev.c:4686:bdev_name_add: *ERROR*: Bdev name 205968a5-371c-4901-9ff8-8d9f5ca8ccda already exists 00:26:42.827 [2024-11-19 10:54:20.601960] bdev.c:7824:bdev_register: *ERROR*: Unable to add uuid:205968a5-371c-4901-9ff8-8d9f5ca8ccda alias for bdev NVMe1n1 00:26:42.827 [2024-11-19 10:54:20.601969] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:42.827 Running I/O for 1 seconds... 00:26:42.827 28894.00 IOPS, 112.87 MiB/s 00:26:42.827 Latency(us) 00:26:42.827 [2024-11-19T09:54:22.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.827 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:42.827 NVMe0n1 : 1.00 28919.50 112.97 0.00 0.00 4416.77 2088.96 9885.01 00:26:42.827 [2024-11-19T09:54:22.022Z] =================================================================================================================== 00:26:42.827 [2024-11-19T09:54:22.022Z] Total : 28919.50 112.97 0.00 0.00 4416.77 2088.96 9885.01 00:26:42.827 Received shutdown signal, test time was about 1.000000 seconds 00:26:42.827 00:26:42.827 Latency(us) 00:26:42.827 [2024-11-19T09:54:22.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.827 [2024-11-19T09:54:22.022Z] =================================================================================================================== 00:26:42.827 [2024-11-19T09:54:22.022Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:42.827 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.827 10:54:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.827 rmmod nvme_tcp 00:26:42.827 rmmod nvme_fabrics 00:26:43.086 rmmod nvme_keyring 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:26:43.086 
10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1104401 ']' 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1104401 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1104401 ']' 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1104401 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1104401 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1104401' 00:26:43.086 killing process with pid 1104401 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1104401 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1104401 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:43.086 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:43.087 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:26:43.087 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:43.087 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:26:43.087 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:26:43.087 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:43.087 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:43.087 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.087 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.087 10:54:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.646 10:54:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:45.646 00:26:45.646 real 0m14.231s 00:26:45.646 user 0m17.853s 00:26:45.646 sys 0m6.591s 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:45.647 ************************************ 00:26:45.647 END TEST nvmf_multicontroller 00:26:45.647 ************************************ 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.647 ************************************ 00:26:45.647 START TEST nvmf_aer 00:26:45.647 ************************************ 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:45.647 * Looking for test storage... 00:26:45.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.647 --rc genhtml_branch_coverage=1 00:26:45.647 --rc genhtml_function_coverage=1 00:26:45.647 --rc genhtml_legend=1 00:26:45.647 --rc geninfo_all_blocks=1 00:26:45.647 --rc geninfo_unexecuted_blocks=1 00:26:45.647 00:26:45.647 ' 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.647 --rc genhtml_branch_coverage=1 00:26:45.647 --rc genhtml_function_coverage=1 00:26:45.647 --rc genhtml_legend=1 00:26:45.647 --rc geninfo_all_blocks=1 00:26:45.647 --rc geninfo_unexecuted_blocks=1 00:26:45.647 00:26:45.647 ' 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.647 --rc genhtml_branch_coverage=1 00:26:45.647 --rc genhtml_function_coverage=1 00:26:45.647 --rc genhtml_legend=1 00:26:45.647 --rc geninfo_all_blocks=1 00:26:45.647 --rc geninfo_unexecuted_blocks=1 00:26:45.647 00:26:45.647 ' 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.647 --rc genhtml_branch_coverage=1 00:26:45.647 --rc genhtml_function_coverage=1 00:26:45.647 --rc genhtml_legend=1 00:26:45.647 --rc geninfo_all_blocks=1 00:26:45.647 --rc geninfo_unexecuted_blocks=1 00:26:45.647 00:26:45.647 ' 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.647 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:45.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.648 10:54:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:53.796 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:53.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:53.796 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.796 10:54:31 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:53.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.796 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.797 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:53.797 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:53.797 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.797 10:54:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:53.797 
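The nvmf_tcp_init sequence traced above wires the host's two E810 ports into a loopback test topology: the target port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and given the target IP 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and an iptables rule (tagged SPDK_NVMF so the iptr helper can strip it again on teardown) admits NVMe/TCP traffic on port 4420. Collected from the trace into one runnable sketch (interface names and addresses are specific to this machine):

    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings that follow confirm both directions of the topology before any NVMe-oF traffic is attempted.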
10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:53.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:26:53.797 00:26:53.797 --- 10.0.0.2 ping statistics --- 00:26:53.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.797 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:26:53.797 00:26:53.797 --- 10.0.0.1 ping statistics --- 00:26:53.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.797 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1109432 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1109432 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1109432 ']' 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.797 10:54:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.797 [2024-11-19 10:54:32.266331] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
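The nvmfappstart call above amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers; a simplified stand-alone sketch follows (the poll loop is a stand-in for the harness's waitforlisten helper, which adds retries and a timeout rather than looping forever):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the app is up (waitforlisten stand-in);
    # the unix socket is filesystem-based, so no netns exec is needed here
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The DPDK EAL banner below is the target coming up with the -m 0xF core mask, which is why four reactors are reported.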
00:26:53.797 [2024-11-19 10:54:32.266399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.797 [2024-11-19 10:54:32.366476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.797 [2024-11-19 10:54:32.420354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.797 [2024-11-19 10:54:32.420408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.797 [2024-11-19 10:54:32.420417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.797 [2024-11-19 10:54:32.420425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.797 [2024-11-19 10:54:32.420431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.797 [2024-11-19 10:54:32.422456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.797 [2024-11-19 10:54:32.422620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.797 [2024-11-19 10:54:32.422781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.797 [2024-11-19 10:54:32.422782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.059 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.059 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:54.059 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:54.059 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:54.059 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.059 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.059 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:54.059 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.059 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.059 [2024-11-19 10:54:33.151103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.060 Malloc0 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.060 [2024-11-19 10:54:33.231176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.060 [ 00:26:54.060 { 00:26:54.060 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:54.060 "subtype": "Discovery", 00:26:54.060 "listen_addresses": [], 00:26:54.060 "allow_any_host": true, 00:26:54.060 "hosts": [] 00:26:54.060 }, 00:26:54.060 { 00:26:54.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.060 "subtype": "NVMe", 00:26:54.060 "listen_addresses": [ 00:26:54.060 { 00:26:54.060 "trtype": "TCP", 00:26:54.060 "adrfam": "IPv4", 00:26:54.060 "traddr": "10.0.0.2", 00:26:54.060 "trsvcid": "4420" 00:26:54.060 } 00:26:54.060 ], 00:26:54.060 "allow_any_host": true, 00:26:54.060 "hosts": [], 00:26:54.060 "serial_number": "SPDK00000000000001", 00:26:54.060 "model_number": "SPDK bdev Controller", 00:26:54.060 "max_namespaces": 2, 00:26:54.060 "min_cntlid": 1, 00:26:54.060 "max_cntlid": 65519, 00:26:54.060 "namespaces": [ 00:26:54.060 { 00:26:54.060 "nsid": 1, 00:26:54.060 "bdev_name": "Malloc0", 00:26:54.060 "name": "Malloc0", 00:26:54.060 "nguid": "6575DBF376484642814E6901DF125635", 00:26:54.060 "uuid": "6575dbf3-7648-4642-814e-6901df125635" 00:26:54.060 } 00:26:54.060 ] 00:26:54.060 } 00:26:54.060 ] 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:54.060 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1109780 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.322 Malloc1 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.322 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.584 Asynchronous Event Request test 00:26:54.584 Attaching to 10.0.0.2 00:26:54.584 Attached to 10.0.0.2 00:26:54.584 Registering asynchronous event callbacks... 00:26:54.584 Starting namespace attribute notice tests for all controllers... 00:26:54.584 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:54.584 aer_cb - Changed Namespace 00:26:54.584 Cleaning up... 
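The subsystem listing that follows is the state left behind by the rpc_cmd calls traced above. Outside the harness the same provisioning can be replayed with rpc.py directly (rpc_cmd is a thin wrapper around it; the $RPC shorthand below is only for brevity, and the flags are copied from the trace):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, flags as logged
    $RPC bdev_malloc_create 64 512 --name Malloc0                     # 64 MB ramdisk, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # becomes nsid 1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # hot-adding a second namespace is what fires the AER namespace-attribute notice:
    $RPC bdev_malloc_create 64 4096 --name Malloc1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2  # nsid 2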
00:26:54.584 [ 00:26:54.584 { 00:26:54.584 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:54.584 "subtype": "Discovery", 00:26:54.584 "listen_addresses": [], 00:26:54.584 "allow_any_host": true, 00:26:54.584 "hosts": [] 00:26:54.584 }, 00:26:54.584 { 00:26:54.584 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.584 "subtype": "NVMe", 00:26:54.584 "listen_addresses": [ 00:26:54.584 { 00:26:54.584 "trtype": "TCP", 00:26:54.584 "adrfam": "IPv4", 00:26:54.584 "traddr": "10.0.0.2", 00:26:54.584 "trsvcid": "4420" 00:26:54.584 } 00:26:54.584 ], 00:26:54.584 "allow_any_host": true, 00:26:54.584 "hosts": [], 00:26:54.584 "serial_number": "SPDK00000000000001", 00:26:54.584 "model_number": "SPDK bdev Controller", 00:26:54.584 "max_namespaces": 2, 00:26:54.584 "min_cntlid": 1, 00:26:54.584 "max_cntlid": 65519, 00:26:54.584 "namespaces": [ 00:26:54.584 { 00:26:54.584 "nsid": 1, 00:26:54.584 "bdev_name": "Malloc0", 00:26:54.584 "name": "Malloc0", 00:26:54.584 "nguid": "6575DBF376484642814E6901DF125635", 00:26:54.584 "uuid": "6575dbf3-7648-4642-814e-6901df125635" 00:26:54.584 }, 00:26:54.584 { 00:26:54.584 "nsid": 2, 00:26:54.584 "bdev_name": "Malloc1", 00:26:54.584 "name": "Malloc1", 00:26:54.584 "nguid": "62F183DE2A5843F589066B753B7B63D8", 00:26:54.584 "uuid": "62f183de-2a58-43f5-8906-6b753b7b63d8" 00:26:54.584 } 00:26:54.584 ] 00:26:54.584 } 00:26:54.584 ] 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1109780 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:54.584 rmmod 
nvme_tcp 00:26:54.584 rmmod nvme_fabrics 00:26:54.584 rmmod nvme_keyring 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1109432 ']' 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1109432 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1109432 ']' 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1109432 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1109432 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1109432' 00:26:54.584 killing process with pid 1109432 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1109432 00:26:54.584 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1109432 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.845 10:54:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.403 10:54:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:57.403 00:26:57.403 real 0m11.572s 00:26:57.403 user 0m8.218s 00:26:57.403 sys 0m6.206s 00:26:57.403 10:54:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:57.403 10:54:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.403 ************************************ 00:26:57.403 END TEST nvmf_aer 00:26:57.403 ************************************ 00:26:57.403 10:54:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:57.403 10:54:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:57.403 10:54:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:57.403 10:54:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.403 ************************************ 00:26:57.403 START TEST nvmf_async_init 00:26:57.403 ************************************ 00:26:57.403 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:57.403 * Looking for test storage... 00:26:57.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:57.403 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:57.403 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:26:57.403 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:57.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.404 --rc genhtml_branch_coverage=1 00:26:57.404 --rc genhtml_function_coverage=1 00:26:57.404 --rc genhtml_legend=1 00:26:57.404 --rc geninfo_all_blocks=1 00:26:57.404 --rc geninfo_unexecuted_blocks=1 00:26:57.404 00:26:57.404 ' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:57.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.404 --rc genhtml_branch_coverage=1 00:26:57.404 --rc genhtml_function_coverage=1 00:26:57.404 --rc genhtml_legend=1 00:26:57.404 --rc geninfo_all_blocks=1 00:26:57.404 --rc geninfo_unexecuted_blocks=1 00:26:57.404 00:26:57.404 ' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:57.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.404 --rc genhtml_branch_coverage=1 00:26:57.404 --rc genhtml_function_coverage=1 00:26:57.404 --rc genhtml_legend=1 00:26:57.404 --rc geninfo_all_blocks=1 00:26:57.404 --rc geninfo_unexecuted_blocks=1 00:26:57.404 00:26:57.404 ' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:57.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.404 --rc genhtml_branch_coverage=1 00:26:57.404 --rc genhtml_function_coverage=1 00:26:57.404 --rc genhtml_legend=1 00:26:57.404 --rc geninfo_all_blocks=1 00:26:57.404 --rc geninfo_unexecuted_blocks=1 00:26:57.404 00:26:57.404 ' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.404 10:54:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:57.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:57.404 10:54:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=47d161d25d894e95a68113fd9558b38d 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:57.404 10:54:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:05.552 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:05.552 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:05.552 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:05.552 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.552 10:54:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.552 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:05.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:27:05.553 00:27:05.553 --- 10.0.0.2 ping statistics --- 00:27:05.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.553 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:27:05.553 00:27:05.553 --- 10.0.0.1 ping statistics --- 00:27:05.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.553 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1114027 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1114027 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1114027 ']' 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.553 10:54:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.553 [2024-11-19 10:54:43.959587] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:27:05.553 [2024-11-19 10:54:43.959663] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.553 [2024-11-19 10:54:44.059780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.553 [2024-11-19 10:54:44.110610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.553 [2024-11-19 10:54:44.110661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.553 [2024-11-19 10:54:44.110670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.553 [2024-11-19 10:54:44.110677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.553 [2024-11-19 10:54:44.110683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.553 [2024-11-19 10:54:44.111473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.814 [2024-11-19 10:54:44.828947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.814 null0 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 47d161d25d894e95a68113fd9558b38d 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.814 [2024-11-19 10:54:44.889308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.814 10:54:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.075 nvme0n1 00:27:06.075 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.075 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:06.075 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.075 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.075 [ 00:27:06.075 { 00:27:06.075 "name": "nvme0n1", 00:27:06.075 "aliases": [ 00:27:06.075 "47d161d2-5d89-4e95-a681-13fd9558b38d" 00:27:06.075 ], 00:27:06.075 "product_name": "NVMe disk", 00:27:06.075 "block_size": 512, 00:27:06.075 "num_blocks": 2097152, 00:27:06.075 "uuid": "47d161d2-5d89-4e95-a681-13fd9558b38d", 00:27:06.075 "numa_id": 0, 00:27:06.075 "assigned_rate_limits": { 00:27:06.075 "rw_ios_per_sec": 0, 00:27:06.075 "rw_mbytes_per_sec": 0, 00:27:06.075 "r_mbytes_per_sec": 0, 00:27:06.075 "w_mbytes_per_sec": 0 00:27:06.075 }, 00:27:06.075 "claimed": false, 00:27:06.075 "zoned": false, 00:27:06.075 "supported_io_types": { 00:27:06.075 "read": true, 00:27:06.075 "write": true, 00:27:06.075 "unmap": false, 00:27:06.075 "flush": true, 00:27:06.075 "reset": true, 00:27:06.075 "nvme_admin": true, 00:27:06.075 "nvme_io": true, 00:27:06.075 "nvme_io_md": false, 00:27:06.075 "write_zeroes": true, 00:27:06.075 "zcopy": false, 00:27:06.075 "get_zone_info": false, 00:27:06.075 "zone_management": false, 00:27:06.075 "zone_append": false, 00:27:06.075 "compare": true, 00:27:06.075 "compare_and_write": true, 00:27:06.075 "abort": true, 00:27:06.075 "seek_hole": false, 00:27:06.075 "seek_data": false, 00:27:06.075 "copy": true, 00:27:06.075 "nvme_iov_md": false 00:27:06.075 }, 00:27:06.075 
"memory_domains": [ 00:27:06.075 { 00:27:06.075 "dma_device_id": "system", 00:27:06.075 "dma_device_type": 1 00:27:06.075 } 00:27:06.075 ], 00:27:06.075 "driver_specific": { 00:27:06.075 "nvme": [ 00:27:06.075 { 00:27:06.075 "trid": { 00:27:06.075 "trtype": "TCP", 00:27:06.075 "adrfam": "IPv4", 00:27:06.075 "traddr": "10.0.0.2", 00:27:06.075 "trsvcid": "4420", 00:27:06.075 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:06.075 }, 00:27:06.075 "ctrlr_data": { 00:27:06.075 "cntlid": 1, 00:27:06.075 "vendor_id": "0x8086", 00:27:06.075 "model_number": "SPDK bdev Controller", 00:27:06.075 "serial_number": "00000000000000000000", 00:27:06.075 "firmware_revision": "25.01", 00:27:06.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:06.075 "oacs": { 00:27:06.075 "security": 0, 00:27:06.075 "format": 0, 00:27:06.075 "firmware": 0, 00:27:06.075 "ns_manage": 0 00:27:06.075 }, 00:27:06.075 "multi_ctrlr": true, 00:27:06.075 "ana_reporting": false 00:27:06.075 }, 00:27:06.075 "vs": { 00:27:06.075 "nvme_version": "1.3" 00:27:06.075 }, 00:27:06.075 "ns_data": { 00:27:06.075 "id": 1, 00:27:06.075 "can_share": true 00:27:06.075 } 00:27:06.075 } 00:27:06.075 ], 00:27:06.075 "mp_policy": "active_passive" 00:27:06.075 } 00:27:06.075 } 00:27:06.075 ] 00:27:06.075 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.076 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:06.076 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.076 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.076 [2024-11-19 10:54:45.163976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:06.076 [2024-11-19 10:54:45.164061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1212ce0 (9): Bad file descriptor 00:27:06.337 [2024-11-19 10:54:45.296269] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.337 [ 00:27:06.337 { 00:27:06.337 "name": "nvme0n1", 00:27:06.337 "aliases": [ 00:27:06.337 "47d161d2-5d89-4e95-a681-13fd9558b38d" 00:27:06.337 ], 00:27:06.337 "product_name": "NVMe disk", 00:27:06.337 "block_size": 512, 00:27:06.337 "num_blocks": 2097152, 00:27:06.337 "uuid": "47d161d2-5d89-4e95-a681-13fd9558b38d", 00:27:06.337 "numa_id": 0, 00:27:06.337 "assigned_rate_limits": { 00:27:06.337 "rw_ios_per_sec": 0, 00:27:06.337 "rw_mbytes_per_sec": 0, 00:27:06.337 "r_mbytes_per_sec": 0, 00:27:06.337 "w_mbytes_per_sec": 0 00:27:06.337 }, 00:27:06.337 "claimed": false, 00:27:06.337 "zoned": false, 00:27:06.337 "supported_io_types": { 00:27:06.337 "read": true, 00:27:06.337 "write": true, 00:27:06.337 "unmap": false, 00:27:06.337 "flush": true, 00:27:06.337 "reset": true, 00:27:06.337 "nvme_admin": true, 00:27:06.337 "nvme_io": true, 00:27:06.337 "nvme_io_md": false, 00:27:06.337 "write_zeroes": true, 00:27:06.337 "zcopy": false, 00:27:06.337 "get_zone_info": false, 00:27:06.337 "zone_management": false, 00:27:06.337 "zone_append": false, 00:27:06.337 "compare": true, 00:27:06.337 "compare_and_write": true, 00:27:06.337 "abort": true, 00:27:06.337 "seek_hole": false, 00:27:06.337 "seek_data": false, 00:27:06.337 "copy": true, 00:27:06.337 "nvme_iov_md": false 00:27:06.337 }, 00:27:06.337 "memory_domains": [ 00:27:06.337 { 00:27:06.337 "dma_device_id": "system", 00:27:06.337 "dma_device_type": 1 00:27:06.337 } 00:27:06.337 ], 00:27:06.337 "driver_specific": { 00:27:06.337 "nvme": [ 00:27:06.337 { 00:27:06.337 "trid": { 00:27:06.337 "trtype": "TCP", 00:27:06.337 "adrfam": "IPv4", 00:27:06.337 "traddr": "10.0.0.2", 00:27:06.337 "trsvcid": "4420", 00:27:06.337 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:06.337 }, 00:27:06.337 "ctrlr_data": { 00:27:06.337 "cntlid": 2, 00:27:06.337 "vendor_id": "0x8086", 00:27:06.337 "model_number": "SPDK bdev Controller", 00:27:06.337 "serial_number": "00000000000000000000", 00:27:06.337 "firmware_revision": "25.01", 00:27:06.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:06.337 "oacs": { 00:27:06.337 "security": 0, 00:27:06.337 "format": 0, 00:27:06.337 "firmware": 0, 00:27:06.337 "ns_manage": 0 00:27:06.337 }, 00:27:06.337 "multi_ctrlr": true, 00:27:06.337 "ana_reporting": false 00:27:06.337 }, 00:27:06.337 "vs": { 00:27:06.337 "nvme_version": "1.3" 00:27:06.337 }, 00:27:06.337 "ns_data": { 00:27:06.337 "id": 1, 00:27:06.337 "can_share": true 00:27:06.337 } 00:27:06.337 } 00:27:06.337 ], 00:27:06.337 "mp_policy": "active_passive" 00:27:06.337 } 00:27:06.337 } 00:27:06.337 ] 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
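The two bdev_get_bdevs dumps bracket the reset: uuid/nguid 47d161d2-5d89-4e95-a681-13fd9558b38d is unchanged, so the namespace identity survived, while ctrlr_data.cntlid moved from 1 to 2, showing the reconnect negotiated a new controller. One way to pull that field out for a scripted check, assuming jq is installed on the test node:
  rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset, 2 after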
00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.H2jWhmFDEs 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.H2jWhmFDEs 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.H2jWhmFDEs 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.337 [2024-11-19 10:54:45.384651] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:06.337 [2024-11-19 10:54:45.384814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.337 [2024-11-19 10:54:45.408727] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:06.337 nvme0n1 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
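The next step upgrades the connection to TLS: the subsystem stops allowing arbitrary hosts, a PSK-protected listener is opened on 4421, and host1 is admitted with key0. A sketch of that provisioning order, reusing the throwaway interchange-format key logged above (a real deployment would generate its own NVMeTLSkey-1:01: secret):
  key_path=$(mktemp)                               # /tmp/tmp.H2jWhmFDEs in this run
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"                           # restrict the key file, as the test does before registering it
  rpc.py keyring_file_add_key key0 "$key_path"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
Both the listener and the attach path print that TLS support is considered experimental, as the notices above show.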
00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.337 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.337 [ 00:27:06.337 { 00:27:06.337 "name": "nvme0n1", 00:27:06.337 "aliases": [ 00:27:06.337 "47d161d2-5d89-4e95-a681-13fd9558b38d" 00:27:06.337 ], 00:27:06.337 "product_name": "NVMe disk", 00:27:06.337 "block_size": 512, 00:27:06.337 "num_blocks": 2097152, 00:27:06.337 "uuid": "47d161d2-5d89-4e95-a681-13fd9558b38d", 00:27:06.337 "numa_id": 0, 00:27:06.337 "assigned_rate_limits": { 00:27:06.337 "rw_ios_per_sec": 0, 00:27:06.337 "rw_mbytes_per_sec": 0, 00:27:06.337 "r_mbytes_per_sec": 0, 00:27:06.337 "w_mbytes_per_sec": 0 00:27:06.337 }, 00:27:06.337 "claimed": false, 00:27:06.337 "zoned": false, 00:27:06.337 "supported_io_types": { 00:27:06.337 "read": true, 00:27:06.337 "write": true, 00:27:06.337 "unmap": false, 00:27:06.337 "flush": true, 00:27:06.337 "reset": true, 00:27:06.337 "nvme_admin": true, 00:27:06.337 "nvme_io": true, 00:27:06.337 "nvme_io_md": false, 00:27:06.337 "write_zeroes": true, 00:27:06.337 "zcopy": false, 00:27:06.337 "get_zone_info": false, 00:27:06.337 "zone_management": false, 00:27:06.337 "zone_append": false, 00:27:06.337 "compare": true, 00:27:06.337 "compare_and_write": true, 00:27:06.337 "abort": true, 00:27:06.337 "seek_hole": false, 00:27:06.337 "seek_data": false, 00:27:06.337 "copy": true, 00:27:06.337 "nvme_iov_md": false 00:27:06.337 }, 00:27:06.337 "memory_domains": [ 00:27:06.337 { 00:27:06.337 "dma_device_id": "system", 00:27:06.337 "dma_device_type": 1 00:27:06.337 } 00:27:06.337 ], 00:27:06.337 "driver_specific": { 00:27:06.337 "nvme": [ 00:27:06.337 { 00:27:06.337 "trid": { 00:27:06.337 "trtype": "TCP", 00:27:06.337 "adrfam": "IPv4", 00:27:06.337 "traddr": "10.0.0.2", 00:27:06.337 "trsvcid": "4421", 00:27:06.338 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:06.338 }, 00:27:06.338 "ctrlr_data": { 00:27:06.338 "cntlid": 3, 00:27:06.338 "vendor_id": "0x8086", 00:27:06.338 "model_number": "SPDK bdev Controller", 00:27:06.338 "serial_number": "00000000000000000000", 00:27:06.338 "firmware_revision": "25.01", 00:27:06.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:06.338 "oacs": { 00:27:06.338 "security": 0, 00:27:06.338 "format": 0, 00:27:06.338 "firmware": 0, 00:27:06.338 "ns_manage": 0 00:27:06.338 }, 00:27:06.338 "multi_ctrlr": true, 00:27:06.338 "ana_reporting": false 00:27:06.338 }, 00:27:06.338 "vs": { 00:27:06.338 "nvme_version": "1.3" 00:27:06.338 }, 00:27:06.338 "ns_data": { 00:27:06.338 "id": 1, 00:27:06.338 "can_share": true 00:27:06.338 } 00:27:06.338 } 00:27:06.338 ], 00:27:06.338 "mp_policy": "active_passive" 00:27:06.338 } 00:27:06.338 } 00:27:06.338 ] 00:27:06.338 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.338 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.338 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.338 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.338 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.338 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.H2jWhmFDEs 00:27:06.338 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
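The third dump confirms the secure path end to end: same namespace uuid, cntlid now 3, and trsvcid 4421 in the trid. Unwinding the test state is then the reverse, exactly as logged (key_path as in the sketch above):
  rpc.py bdev_nvme_detach_controller nvme0
  rm -f "$key_path"                                # do not leave the PSK material on disk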
00:27:06.338 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:27:06.338 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:06.338 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:06.598 rmmod nvme_tcp 00:27:06.598 rmmod nvme_fabrics 00:27:06.598 rmmod nvme_keyring 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1114027 ']' 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1114027 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1114027 ']' 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1114027 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1114027 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1114027' 00:27:06.598 killing process with pid 1114027 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1114027 00:27:06.598 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1114027 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
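nvmftestfini then tears down the environment itself. Condensed from the records above, assuming $nvmfpid holds the target pid (1114027 in this run); _remove_spdk_ns is also invoked for the cvl_0_0_ns_spdk namespace, but its body is elided in this log:
  modprobe -v -r nvme-tcp                          # -v shows the rmmod chain: nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"               # killprocess: stop the in-namespace nvmf_tgt
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the harness tagged with SPDK_NVMF
  ip -4 addr flush cvl_0_1                         # return the initiator-side port to a clean state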
00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.859 10:54:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.769 10:54:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:08.769 00:27:08.769 real 0m11.825s 00:27:08.769 user 0m4.284s 00:27:08.769 sys 0m6.125s 00:27:08.769 10:54:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.769 10:54:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:08.769 ************************************ 00:27:08.769 END TEST nvmf_async_init 00:27:08.769 ************************************ 00:27:08.769 10:54:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:08.769 10:54:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:08.769 10:54:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.769 10:54:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.031 ************************************ 00:27:09.031 START TEST dma 00:27:09.031 ************************************ 00:27:09.031 10:54:47 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:09.031 * Looking for test storage... 00:27:09.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.031 --rc genhtml_branch_coverage=1 00:27:09.031 --rc genhtml_function_coverage=1 00:27:09.031 --rc genhtml_legend=1 00:27:09.031 --rc geninfo_all_blocks=1 00:27:09.031 --rc geninfo_unexecuted_blocks=1 00:27:09.031 00:27:09.031 ' 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.031 --rc genhtml_branch_coverage=1 00:27:09.031 --rc genhtml_function_coverage=1 00:27:09.031 --rc genhtml_legend=1 00:27:09.031 --rc geninfo_all_blocks=1 00:27:09.031 --rc geninfo_unexecuted_blocks=1 00:27:09.031 00:27:09.031 ' 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.031 --rc genhtml_branch_coverage=1 00:27:09.031 --rc genhtml_function_coverage=1 00:27:09.031 --rc genhtml_legend=1 00:27:09.031 --rc geninfo_all_blocks=1 00:27:09.031 --rc geninfo_unexecuted_blocks=1 00:27:09.031 00:27:09.031 ' 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.031 --rc genhtml_branch_coverage=1 00:27:09.031 --rc genhtml_function_coverage=1 00:27:09.031 --rc genhtml_legend=1 00:27:09.031 --rc geninfo_all_blocks=1 00:27:09.031 --rc geninfo_unexecuted_blocks=1 00:27:09.031 00:27:09.031 ' 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.031 
10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:09.031 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.032 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.032 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.032 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:09.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:09.032 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:09.032 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:09.032 10:54:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:09.293 00:27:09.293 real 0m0.235s 00:27:09.293 user 0m0.142s 00:27:09.293 sys 0m0.109s 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:09.293 ************************************ 00:27:09.293 END TEST dma 00:27:09.293 ************************************ 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.293 ************************************ 00:27:09.293 START TEST nvmf_identify 00:27:09.293 
************************************ 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:09.293 * Looking for test storage... 00:27:09.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:27:09.293 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:09.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.555 --rc genhtml_branch_coverage=1 00:27:09.555 --rc genhtml_function_coverage=1 00:27:09.555 --rc genhtml_legend=1 00:27:09.555 --rc geninfo_all_blocks=1 00:27:09.555 --rc geninfo_unexecuted_blocks=1 00:27:09.555 00:27:09.555 ' 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:09.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.555 --rc genhtml_branch_coverage=1 00:27:09.555 --rc genhtml_function_coverage=1 00:27:09.555 --rc genhtml_legend=1 00:27:09.555 --rc geninfo_all_blocks=1 00:27:09.555 --rc geninfo_unexecuted_blocks=1 00:27:09.555 00:27:09.555 ' 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:09.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.555 --rc genhtml_branch_coverage=1 00:27:09.555 --rc genhtml_function_coverage=1 00:27:09.555 --rc genhtml_legend=1 00:27:09.555 --rc geninfo_all_blocks=1 00:27:09.555 --rc geninfo_unexecuted_blocks=1 00:27:09.555 00:27:09.555 ' 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:09.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.555 --rc genhtml_branch_coverage=1 00:27:09.555 --rc genhtml_function_coverage=1 00:27:09.555 --rc genhtml_legend=1 00:27:09.555 --rc geninfo_all_blocks=1 00:27:09.555 --rc geninfo_unexecuted_blocks=1 00:27:09.555 00:27:09.555 ' 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.555 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:09.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:09.556 10:54:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:17.700 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:17.700 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
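Note: the array setup above is gather_supported_nvmf_pci_devs building per-NIC-family PCI ID tables (Intel E810/X722 and several Mellanox ConnectX parts) and, since the transport is tcp, resolving each matched function to its kernel net device through the /sys/bus/pci/devices/$pci/net/ glob visible in the following records. A minimal standalone sketch of that resolution step is below; lspci is an assumption of the sketch (the script reads its own pci_bus_cache instead), while the 8086:159b device ID and the sysfs glob are taken directly from the log.

# Sketch: map each Intel E810 (8086:159b) function to its kernel net device
# using the same sysfs glob as nvmf/common.sh. lspci is an assumption here;
# the test script resolves PCI addresses from its own pci_bus_cache.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] && echo "Found net device under $pci: ${path##*/}"
    done
done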
00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.700 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:17.701 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:17.701 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:17.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:27:17.701 00:27:17.701 --- 10.0.0.2 ping statistics --- 00:27:17.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.701 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:17.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:27:17.701 00:27:17.701 --- 10.0.0.1 ping statistics --- 00:27:17.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.701 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:17.701 10:54:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1118531 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1118531 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1118531 ']' 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.701 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.701 [2024-11-19 10:54:56.108485] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
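Note: at this point nvmf_tcp_init has split the two E810 ports into a back-to-back target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), so NVMe/TCP traffic crosses the physical link instead of short-circuiting through loopback, and nvmf_tgt is then launched inside that namespace. A condensed replay of the topology setup, with interface names and addresses exactly as logged above:

ip netns add cvl_0_0_ns_spdk                       # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port out of the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns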
00:27:17.701 [2024-11-19 10:54:56.108553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.701 [2024-11-19 10:54:56.208686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.701 [2024-11-19 10:54:56.264036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.701 [2024-11-19 10:54:56.264088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.701 [2024-11-19 10:54:56.264097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.701 [2024-11-19 10:54:56.264104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.701 [2024-11-19 10:54:56.264111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.701 [2024-11-19 10:54:56.266515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.701 [2024-11-19 10:54:56.266675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.701 [2024-11-19 10:54:56.266836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.701 [2024-11-19 10:54:56.266837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.963 [2024-11-19 10:54:56.947074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.963 10:54:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.963 Malloc0 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.963 [2024-11-19 10:54:57.063204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.963 [ 00:27:17.963 { 00:27:17.963 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:17.963 "subtype": "Discovery", 00:27:17.963 "listen_addresses": [ 00:27:17.963 { 00:27:17.963 "trtype": "TCP", 00:27:17.963 "adrfam": "IPv4", 00:27:17.963 "traddr": "10.0.0.2", 00:27:17.963 "trsvcid": "4420" 00:27:17.963 } 00:27:17.963 ], 00:27:17.963 "allow_any_host": true, 00:27:17.963 "hosts": [] 00:27:17.963 }, 00:27:17.963 { 00:27:17.963 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.963 "subtype": "NVMe", 00:27:17.963 "listen_addresses": [ 00:27:17.963 { 00:27:17.963 "trtype": "TCP", 00:27:17.963 "adrfam": "IPv4", 00:27:17.963 "traddr": "10.0.0.2", 00:27:17.963 "trsvcid": "4420" 00:27:17.963 } 00:27:17.963 ], 00:27:17.963 "allow_any_host": true, 00:27:17.963 "hosts": [], 00:27:17.963 "serial_number": "SPDK00000000000001", 00:27:17.963 "model_number": "SPDK bdev Controller", 00:27:17.963 "max_namespaces": 32, 00:27:17.963 "min_cntlid": 1, 00:27:17.963 "max_cntlid": 65519, 00:27:17.963 "namespaces": [ 00:27:17.963 { 00:27:17.963 "nsid": 1, 00:27:17.963 "bdev_name": "Malloc0", 00:27:17.963 "name": "Malloc0", 00:27:17.963 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:17.963 "eui64": "ABCDEF0123456789", 00:27:17.963 "uuid": "921537c3-257e-4352-9c35-4d079e6ade23" 00:27:17.963 } 00:27:17.963 ] 00:27:17.963 } 00:27:17.963 ] 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.963 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:17.963 [2024-11-19 10:54:57.127040] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:27:17.964 [2024-11-19 10:54:57.127090] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118873 ] 00:27:18.228 [2024-11-19 10:54:57.181269] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:27:18.228 [2024-11-19 10:54:57.181343] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:18.228 [2024-11-19 10:54:57.181349] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:18.228 [2024-11-19 10:54:57.181363] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:18.228 [2024-11-19 10:54:57.181377] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:18.228 [2024-11-19 10:54:57.185561] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:27:18.228 [2024-11-19 10:54:57.185616] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe28690 0 00:27:18.228 [2024-11-19 10:54:57.193182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:18.228 [2024-11-19 10:54:57.193200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:18.228 [2024-11-19 10:54:57.193205] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:18.228 [2024-11-19 10:54:57.193209] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:18.228 [2024-11-19 10:54:57.193255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.228 [2024-11-19 10:54:57.193262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.228 [2024-11-19 10:54:57.193267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe28690) 00:27:18.228 [2024-11-19 10:54:57.193283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:18.228 [2024-11-19 10:54:57.193308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a100, cid 0, qid 0 00:27:18.228 [2024-11-19 10:54:57.204174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.228 [2024-11-19 10:54:57.204186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.228 [2024-11-19 10:54:57.204190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.228 [2024-11-19 10:54:57.204195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690 00:27:18.228 [2024-11-19 10:54:57.204209] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:18.228 [2024-11-19 10:54:57.204218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:27:18.228 [2024-11-19 10:54:57.204224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:27:18.228 [2024-11-19 10:54:57.204242] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.228 [2024-11-19 10:54:57.204252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.228 [2024-11-19 10:54:57.204256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe28690) 00:27:18.228 [2024-11-19 10:54:57.204266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.228 [2024-11-19 10:54:57.204282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a100, cid 0, qid 0 00:27:18.228 [2024-11-19 10:54:57.204498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.228 [2024-11-19 10:54:57.204505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.228 [2024-11-19 10:54:57.204508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.228 [2024-11-19 10:54:57.204512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690 00:27:18.228 [2024-11-19 10:54:57.204519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:27:18.228 [2024-11-19 10:54:57.204526] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:27:18.228 [2024-11-19 10:54:57.204534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.204538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.204541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe28690) 00:27:18.229 [2024-11-19 10:54:57.204548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.229 [2024-11-19 10:54:57.204559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a100, cid 0, qid 0 00:27:18.229 [2024-11-19 10:54:57.204763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.229 [2024-11-19 10:54:57.204769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.229 [2024-11-19 10:54:57.204772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.204776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690 00:27:18.229 [2024-11-19 10:54:57.204782] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:27:18.229 [2024-11-19 10:54:57.204791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:18.229 [2024-11-19 10:54:57.204798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.204801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.204805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe28690) 00:27:18.229 [2024-11-19 10:54:57.204812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.229 [2024-11-19 10:54:57.204822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a100, cid 0, qid 0 
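Note: these *DEBUG* records trace SPDK's host-side controller bring-up over the fabric admin queue: FABRIC CONNECT, PROPERTY GET reads of VS and CAP, a CC check, disabling until CSTS.RDY = 0, then (below) setting CC.EN = 1 and waiting for CSTS.RDY = 1 before IDENTIFY. The kernel initiator performs the same sequence internally; a hedged nvme-cli equivalent against the cnode1 subsystem configured earlier is sketched here (the flags mirror NVME_CONNECT and NVME_HOST from the log; the generated hostnqn value is illustrative):

# Same bring-up, driven by the kernel initiator via nvme-cli (sketch).
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
     -n nqn.2016-06.io.spdk:cnode1 \
     --hostnqn="$(nvme gen-hostnqn)"
nvme list                                      # Malloc0 should surface as /dev/nvmeXnY
nvme disconnect -n nqn.2016-06.io.spdk:cnode1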
00:27:18.229 [2024-11-19 10:54:57.204996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.229 [2024-11-19 10:54:57.205003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.229 [2024-11-19 10:54:57.205006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690 00:27:18.229 [2024-11-19 10:54:57.205016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:18.229 [2024-11-19 10:54:57.205026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe28690) 00:27:18.229 [2024-11-19 10:54:57.205043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.229 [2024-11-19 10:54:57.205054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a100, cid 0, qid 0 00:27:18.229 [2024-11-19 10:54:57.205258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.229 [2024-11-19 10:54:57.205264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.229 [2024-11-19 10:54:57.205268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690 00:27:18.229 [2024-11-19 10:54:57.205277] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:18.229 [2024-11-19 10:54:57.205282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:18.229 [2024-11-19 10:54:57.205290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:18.229 [2024-11-19 10:54:57.205402] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:27:18.229 [2024-11-19 10:54:57.205407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:18.229 [2024-11-19 10:54:57.205417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe28690) 00:27:18.229 [2024-11-19 10:54:57.205431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.229 [2024-11-19 10:54:57.205442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a100, cid 0, qid 0 00:27:18.229 [2024-11-19 10:54:57.205646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.229 [2024-11-19 10:54:57.205652] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.229 [2024-11-19 10:54:57.205656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690 00:27:18.229 [2024-11-19 10:54:57.205664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:18.229 [2024-11-19 10:54:57.205674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe28690) 00:27:18.229 [2024-11-19 10:54:57.205689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.229 [2024-11-19 10:54:57.205699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a100, cid 0, qid 0 00:27:18.229 [2024-11-19 10:54:57.205870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.229 [2024-11-19 10:54:57.205876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.229 [2024-11-19 10:54:57.205880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690 00:27:18.229 [2024-11-19 10:54:57.205888] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:18.229 [2024-11-19 10:54:57.205893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:18.229 [2024-11-19 10:54:57.205903] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:27:18.229 [2024-11-19 10:54:57.205918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:18.229 [2024-11-19 10:54:57.205928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.205932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe28690) 00:27:18.229 [2024-11-19 10:54:57.205939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.229 [2024-11-19 10:54:57.205949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a100, cid 0, qid 0 00:27:18.229 [2024-11-19 10:54:57.206264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.229 [2024-11-19 10:54:57.206270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.229 [2024-11-19 10:54:57.206274] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206279] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe28690): datao=0, datal=4096, cccid=0 00:27:18.229 [2024-11-19 10:54:57.206283] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xe8a100) on tqpair(0xe28690): expected_datao=0, payload_size=4096 00:27:18.229 [2024-11-19 10:54:57.206288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206296] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206301] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.229 [2024-11-19 10:54:57.206453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.229 [2024-11-19 10:54:57.206457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690 00:27:18.229 [2024-11-19 10:54:57.206470] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:27:18.229 [2024-11-19 10:54:57.206475] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:27:18.229 [2024-11-19 10:54:57.206480] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:27:18.229 [2024-11-19 10:54:57.206488] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:27:18.229 [2024-11-19 10:54:57.206493] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:27:18.229 [2024-11-19 10:54:57.206498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:27:18.229 [2024-11-19 10:54:57.206509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:18.229 [2024-11-19 10:54:57.206516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe28690) 00:27:18.229 [2024-11-19 10:54:57.206531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:18.229 [2024-11-19 10:54:57.206543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a100, cid 0, qid 0 00:27:18.229 [2024-11-19 10:54:57.206741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.229 [2024-11-19 10:54:57.206747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.229 [2024-11-19 10:54:57.206751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206757] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690 00:27:18.229 [2024-11-19 10:54:57.206765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe28690) 00:27:18.229 [2024-11-19 
10:54:57.206779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.229 [2024-11-19 10:54:57.206786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.229 [2024-11-19 10:54:57.206793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe28690) 00:27:18.230 [2024-11-19 10:54:57.206799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.230 [2024-11-19 10:54:57.206805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.206809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.206813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe28690) 00:27:18.230 [2024-11-19 10:54:57.206818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.230 [2024-11-19 10:54:57.206824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.206828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.206832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690) 00:27:18.230 [2024-11-19 10:54:57.206837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.230 [2024-11-19 10:54:57.206842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:18.230 [2024-11-19 10:54:57.206851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:18.230 [2024-11-19 10:54:57.206857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.206861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe28690) 00:27:18.230 [2024-11-19 10:54:57.206868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.230 [2024-11-19 10:54:57.206880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a100, cid 0, qid 0 00:27:18.230 [2024-11-19 10:54:57.206885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a280, cid 1, qid 0 00:27:18.230 [2024-11-19 10:54:57.206890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a400, cid 2, qid 0 00:27:18.230 [2024-11-19 10:54:57.206894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0 00:27:18.230 [2024-11-19 10:54:57.206899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a700, cid 4, qid 0 00:27:18.230 [2024-11-19 10:54:57.207145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.230 [2024-11-19 10:54:57.207152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.230 [2024-11-19 10:54:57.207155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.230 
[2024-11-19 10:54:57.207167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a700) on tqpair=0xe28690 00:27:18.230 [2024-11-19 10:54:57.207176] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:27:18.230 [2024-11-19 10:54:57.207181] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:27:18.230 [2024-11-19 10:54:57.207195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe28690) 00:27:18.230 [2024-11-19 10:54:57.207206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.230 [2024-11-19 10:54:57.207216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a700, cid 4, qid 0 00:27:18.230 [2024-11-19 10:54:57.207403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.230 [2024-11-19 10:54:57.207409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.230 [2024-11-19 10:54:57.207413] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207416] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe28690): datao=0, datal=4096, cccid=4 00:27:18.230 [2024-11-19 10:54:57.207421] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8a700) on tqpair(0xe28690): expected_datao=0, payload_size=4096 00:27:18.230 [2024-11-19 10:54:57.207425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207443] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207447] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.230 [2024-11-19 10:54:57.207597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.230 [2024-11-19 10:54:57.207601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a700) on tqpair=0xe28690 00:27:18.230 [2024-11-19 10:54:57.207618] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:27:18.230 [2024-11-19 10:54:57.207645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe28690) 00:27:18.230 [2024-11-19 10:54:57.207657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.230 [2024-11-19 10:54:57.207664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe28690) 00:27:18.230 [2024-11-19 10:54:57.207678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.230 [2024-11-19 10:54:57.207693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a700, cid 4, qid 0 00:27:18.230 [2024-11-19 10:54:57.207698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a880, cid 5, qid 0 00:27:18.230 [2024-11-19 10:54:57.207942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.230 [2024-11-19 10:54:57.207948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.230 [2024-11-19 10:54:57.207952] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207956] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe28690): datao=0, datal=1024, cccid=4 00:27:18.230 [2024-11-19 10:54:57.207960] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8a700) on tqpair(0xe28690): expected_datao=0, payload_size=1024 00:27:18.230 [2024-11-19 10:54:57.207965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207971] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207975] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.230 [2024-11-19 10:54:57.207989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.230 [2024-11-19 10:54:57.207993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.207997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a880) on tqpair=0xe28690 00:27:18.230 [2024-11-19 10:54:57.252172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.230 [2024-11-19 10:54:57.252184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.230 [2024-11-19 10:54:57.252187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.252191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a700) on tqpair=0xe28690 00:27:18.230 [2024-11-19 10:54:57.252206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.252210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe28690) 00:27:18.230 [2024-11-19 10:54:57.252219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.230 [2024-11-19 10:54:57.252237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a700, cid 4, qid 0 00:27:18.230 [2024-11-19 10:54:57.252461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.230 [2024-11-19 10:54:57.252467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.230 [2024-11-19 10:54:57.252470] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.252474] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe28690): datao=0, datal=3072, cccid=4 00:27:18.230 [2024-11-19 10:54:57.252479] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8a700) on tqpair(0xe28690): expected_datao=0, payload_size=3072 00:27:18.230 [2024-11-19 10:54:57.252483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
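Note: the GET LOG PAGE (02) commands with log identifier 0x70 in these records are the discovery log being read, consistent with the usual pattern of an initial 1024-byte read covering the log header, a 3072-byte read of the full log (the c2h payloads above), and, just below, an 8-byte re-read of the generation counter to confirm the log did not change mid-fetch. The same query can be issued in one shot from the initiator side of the namespace split; the hostnqn flag mirrors NVME_HOST in the log and its value here is illustrative:

# One-shot discovery query against the target (sketch; run from the root ns).
nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$(nvme gen-hostnqn)"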
00:27:18.230 [2024-11-19 10:54:57.252501] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.252505] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.296170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.230 [2024-11-19 10:54:57.296181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.230 [2024-11-19 10:54:57.296184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.296188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a700) on tqpair=0xe28690 00:27:18.230 [2024-11-19 10:54:57.296201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.296205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe28690) 00:27:18.230 [2024-11-19 10:54:57.296213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.230 [2024-11-19 10:54:57.296231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a700, cid 4, qid 0 00:27:18.230 [2024-11-19 10:54:57.296414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.230 [2024-11-19 10:54:57.296421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.230 [2024-11-19 10:54:57.296424] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.296428] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe28690): datao=0, datal=8, cccid=4 00:27:18.230 [2024-11-19 10:54:57.296433] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8a700) on tqpair(0xe28690): expected_datao=0, payload_size=8 00:27:18.230 [2024-11-19 10:54:57.296437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.296444] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.296447] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.339172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.230 [2024-11-19 10:54:57.339181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.230 [2024-11-19 10:54:57.339190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.230 [2024-11-19 10:54:57.339194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a700) on tqpair=0xe28690 00:27:18.231 ===================================================== 00:27:18.231 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:18.231 ===================================================== 00:27:18.231 Controller Capabilities/Features 00:27:18.231 ================================ 00:27:18.231 Vendor ID: 0000 00:27:18.231 Subsystem Vendor ID: 0000 00:27:18.231 Serial Number: .................... 00:27:18.231 Model Number: ........................................ 
00:27:18.231 Firmware Version: 25.01 00:27:18.231 Recommended Arb Burst: 0 00:27:18.231 IEEE OUI Identifier: 00 00 00 00:27:18.231 Multi-path I/O 00:27:18.231 May have multiple subsystem ports: No 00:27:18.231 May have multiple controllers: No 00:27:18.231 Associated with SR-IOV VF: No 00:27:18.231 Max Data Transfer Size: 131072 00:27:18.231 Max Number of Namespaces: 0 00:27:18.231 Max Number of I/O Queues: 1024 00:27:18.231 NVMe Specification Version (VS): 1.3 00:27:18.231 NVMe Specification Version (Identify): 1.3 00:27:18.231 Maximum Queue Entries: 128 00:27:18.231 Contiguous Queues Required: Yes 00:27:18.231 Arbitration Mechanisms Supported 00:27:18.231 Weighted Round Robin: Not Supported 00:27:18.231 Vendor Specific: Not Supported 00:27:18.231 Reset Timeout: 15000 ms 00:27:18.231 Doorbell Stride: 4 bytes 00:27:18.231 NVM Subsystem Reset: Not Supported 00:27:18.231 Command Sets Supported 00:27:18.231 NVM Command Set: Supported 00:27:18.231 Boot Partition: Not Supported 00:27:18.231 Memory Page Size Minimum: 4096 bytes 00:27:18.231 Memory Page Size Maximum: 4096 bytes 00:27:18.231 Persistent Memory Region: Not Supported 00:27:18.231 Optional Asynchronous Events Supported 00:27:18.231 Namespace Attribute Notices: Not Supported 00:27:18.231 Firmware Activation Notices: Not Supported 00:27:18.231 ANA Change Notices: Not Supported 00:27:18.231 PLE Aggregate Log Change Notices: Not Supported 00:27:18.231 LBA Status Info Alert Notices: Not Supported 00:27:18.231 EGE Aggregate Log Change Notices: Not Supported 00:27:18.231 Normal NVM Subsystem Shutdown event: Not Supported 00:27:18.231 Zone Descriptor Change Notices: Not Supported 00:27:18.231 Discovery Log Change Notices: Supported 00:27:18.231 Controller Attributes 00:27:18.231 128-bit Host Identifier: Not Supported 00:27:18.231 Non-Operational Permissive Mode: Not Supported 00:27:18.231 NVM Sets: Not Supported 00:27:18.231 Read Recovery Levels: Not Supported 00:27:18.231 Endurance Groups: Not Supported 00:27:18.231 Predictable Latency Mode: Not Supported 00:27:18.231 Traffic Based Keep ALive: Not Supported 00:27:18.231 Namespace Granularity: Not Supported 00:27:18.231 SQ Associations: Not Supported 00:27:18.231 UUID List: Not Supported 00:27:18.231 Multi-Domain Subsystem: Not Supported 00:27:18.231 Fixed Capacity Management: Not Supported 00:27:18.231 Variable Capacity Management: Not Supported 00:27:18.231 Delete Endurance Group: Not Supported 00:27:18.231 Delete NVM Set: Not Supported 00:27:18.231 Extended LBA Formats Supported: Not Supported 00:27:18.231 Flexible Data Placement Supported: Not Supported 00:27:18.231 00:27:18.231 Controller Memory Buffer Support 00:27:18.231 ================================ 00:27:18.231 Supported: No 00:27:18.231 00:27:18.231 Persistent Memory Region Support 00:27:18.231 ================================ 00:27:18.231 Supported: No 00:27:18.231 00:27:18.231 Admin Command Set Attributes 00:27:18.231 ============================ 00:27:18.231 Security Send/Receive: Not Supported 00:27:18.231 Format NVM: Not Supported 00:27:18.231 Firmware Activate/Download: Not Supported 00:27:18.231 Namespace Management: Not Supported 00:27:18.231 Device Self-Test: Not Supported 00:27:18.231 Directives: Not Supported 00:27:18.231 NVMe-MI: Not Supported 00:27:18.231 Virtualization Management: Not Supported 00:27:18.231 Doorbell Buffer Config: Not Supported 00:27:18.231 Get LBA Status Capability: Not Supported 00:27:18.231 Command & Feature Lockdown Capability: Not Supported 00:27:18.231 Abort Command Limit: 1 00:27:18.231 Async 
Event Request Limit: 4 00:27:18.231 Number of Firmware Slots: N/A 00:27:18.231 Firmware Slot 1 Read-Only: N/A 00:27:18.231 Firmware Activation Without Reset: N/A 00:27:18.231 Multiple Update Detection Support: N/A 00:27:18.231 Firmware Update Granularity: No Information Provided 00:27:18.231 Per-Namespace SMART Log: No 00:27:18.231 Asymmetric Namespace Access Log Page: Not Supported 00:27:18.231 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:18.231 Command Effects Log Page: Not Supported 00:27:18.231 Get Log Page Extended Data: Supported 00:27:18.231 Telemetry Log Pages: Not Supported 00:27:18.231 Persistent Event Log Pages: Not Supported 00:27:18.231 Supported Log Pages Log Page: May Support 00:27:18.231 Commands Supported & Effects Log Page: Not Supported 00:27:18.231 Feature Identifiers & Effects Log Page:May Support 00:27:18.231 NVMe-MI Commands & Effects Log Page: May Support 00:27:18.231 Data Area 4 for Telemetry Log: Not Supported 00:27:18.231 Error Log Page Entries Supported: 128 00:27:18.231 Keep Alive: Not Supported 00:27:18.231 00:27:18.231 NVM Command Set Attributes 00:27:18.231 ========================== 00:27:18.231 Submission Queue Entry Size 00:27:18.231 Max: 1 00:27:18.231 Min: 1 00:27:18.231 Completion Queue Entry Size 00:27:18.231 Max: 1 00:27:18.231 Min: 1 00:27:18.231 Number of Namespaces: 0 00:27:18.231 Compare Command: Not Supported 00:27:18.231 Write Uncorrectable Command: Not Supported 00:27:18.231 Dataset Management Command: Not Supported 00:27:18.231 Write Zeroes Command: Not Supported 00:27:18.231 Set Features Save Field: Not Supported 00:27:18.231 Reservations: Not Supported 00:27:18.231 Timestamp: Not Supported 00:27:18.231 Copy: Not Supported 00:27:18.231 Volatile Write Cache: Not Present 00:27:18.231 Atomic Write Unit (Normal): 1 00:27:18.231 Atomic Write Unit (PFail): 1 00:27:18.231 Atomic Compare & Write Unit: 1 00:27:18.231 Fused Compare & Write: Supported 00:27:18.231 Scatter-Gather List 00:27:18.231 SGL Command Set: Supported 00:27:18.231 SGL Keyed: Supported 00:27:18.231 SGL Bit Bucket Descriptor: Not Supported 00:27:18.231 SGL Metadata Pointer: Not Supported 00:27:18.231 Oversized SGL: Not Supported 00:27:18.231 SGL Metadata Address: Not Supported 00:27:18.231 SGL Offset: Supported 00:27:18.231 Transport SGL Data Block: Not Supported 00:27:18.231 Replay Protected Memory Block: Not Supported 00:27:18.231 00:27:18.231 Firmware Slot Information 00:27:18.231 ========================= 00:27:18.231 Active slot: 0 00:27:18.231 00:27:18.231 00:27:18.231 Error Log 00:27:18.231 ========= 00:27:18.231 00:27:18.231 Active Namespaces 00:27:18.231 ================= 00:27:18.231 Discovery Log Page 00:27:18.231 ================== 00:27:18.231 Generation Counter: 2 00:27:18.231 Number of Records: 2 00:27:18.231 Record Format: 0 00:27:18.231 00:27:18.231 Discovery Log Entry 0 00:27:18.231 ---------------------- 00:27:18.231 Transport Type: 3 (TCP) 00:27:18.231 Address Family: 1 (IPv4) 00:27:18.231 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:18.231 Entry Flags: 00:27:18.231 Duplicate Returned Information: 1 00:27:18.231 Explicit Persistent Connection Support for Discovery: 1 00:27:18.231 Transport Requirements: 00:27:18.231 Secure Channel: Not Required 00:27:18.231 Port ID: 0 (0x0000) 00:27:18.231 Controller ID: 65535 (0xffff) 00:27:18.231 Admin Max SQ Size: 128 00:27:18.231 Transport Service Identifier: 4420 00:27:18.231 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:18.231 Transport Address: 10.0.0.2 00:27:18.231 
Discovery Log Entry 1 00:27:18.231 ---------------------- 00:27:18.231 Transport Type: 3 (TCP) 00:27:18.231 Address Family: 1 (IPv4) 00:27:18.231 Subsystem Type: 2 (NVM Subsystem) 00:27:18.231 Entry Flags: 00:27:18.231 Duplicate Returned Information: 0 00:27:18.231 Explicit Persistent Connection Support for Discovery: 0 00:27:18.231 Transport Requirements: 00:27:18.231 Secure Channel: Not Required 00:27:18.231 Port ID: 0 (0x0000) 00:27:18.231 Controller ID: 65535 (0xffff) 00:27:18.231 Admin Max SQ Size: 128 00:27:18.231 Transport Service Identifier: 4420 00:27:18.231 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:18.231 Transport Address: 10.0.0.2 [2024-11-19 10:54:57.339298] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:27:18.232 [2024-11-19 10:54:57.339310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690 00:27:18.232 [2024-11-19 10:54:57.339318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.232 [2024-11-19 10:54:57.339324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a280) on tqpair=0xe28690 00:27:18.232 [2024-11-19 10:54:57.339329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.232 [2024-11-19 10:54:57.339334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a400) on tqpair=0xe28690 00:27:18.232 [2024-11-19 10:54:57.339339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.232 [2024-11-19 10:54:57.339344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690 00:27:18.232 [2024-11-19 10:54:57.339348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.232 [2024-11-19 10:54:57.339362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.232 [2024-11-19 10:54:57.339366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.232 [2024-11-19 10:54:57.339370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690) 00:27:18.232 [2024-11-19 10:54:57.339379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.232 [2024-11-19 10:54:57.339394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0 00:27:18.232 [2024-11-19 10:54:57.339613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.232 [2024-11-19 10:54:57.339620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.232 [2024-11-19 10:54:57.339623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.232 [2024-11-19 10:54:57.339627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690 00:27:18.232 [2024-11-19 10:54:57.339635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.232 [2024-11-19 10:54:57.339639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.232 [2024-11-19 10:54:57.339642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690) 00:27:18.232 [2024-11-19 10:54:57.339649] 
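The GET LOG PAGE (02) commands traced earlier (cdw10:02ff0070, then 00010070) are the reads of log page 0x70, the NVMe-oF discovery log, which produced the two entries dumped above: the discovery subsystem itself (Subsystem Type 3) and nqn.2016-06.io.spdk:cnode1 (Subsystem Type 2), both over TCP/IPv4 at 10.0.0.2:4420. A minimal sketch of the same two-step read with SPDK's public API follows; the buffer sizing, polling loop, and error handling are illustrative assumptions, not the tool's actual implementation.

```c
/* Hedged sketch: fetch and print the discovery log, mirroring the
 * header-then-full-page reads visible in the trace above. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static void get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	*(bool *)arg = true;
}

static void dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Assumes two records, as reported above; a real reader would fetch
	 * the header first and then re-read with numrec entries. */
	size_t len = sizeof(struct spdk_nvmf_discovery_log_page) +
		     2 * sizeof(struct spdk_nvmf_discovery_log_page_entry);
	struct spdk_nvmf_discovery_log_page *log = calloc(1, len);
	bool done = false;

	/* SPDK_NVME_LOG_DISCOVERY == 0x70, the page read in the trace. */
	spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					 SPDK_NVME_GLOBAL_NS_TAG, log, len, 0,
					 get_log_done, &done);
	while (!done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	for (uint64_t i = 0; i < log->numrec; i++) {
		struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];
		/* trtype 3 = TCP, adrfam 1 = IPv4, subtype 2 = NVM, 3 = discovery */
		printf("entry %llu: trtype=%u subtype=%u trsvcid=%.32s subnqn=%.256s\n",
		       (unsigned long long)i, e->trtype, e->subtype,
		       e->trsvcid, e->subnqn);
	}
	free(log);
}
```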
[2024-11-19 10:54:57.339298] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:27:18.232 [2024-11-19 10:54:57.339310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a100) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.339318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:18.232 [2024-11-19 10:54:57.339324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a280) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.339329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:18.232 [2024-11-19 10:54:57.339334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a400) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.339339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:18.232 [2024-11-19 10:54:57.339344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.339348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:18.232 [2024-11-19 10:54:57.339362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.339366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.339370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.232 [2024-11-19 10:54:57.339379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.232 [2024-11-19 10:54:57.339394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.232 [2024-11-19 10:54:57.339613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.232 [2024-11-19 10:54:57.339620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.232 [2024-11-19 10:54:57.339623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.339627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.339635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.339639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.339642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.232 [2024-11-19 10:54:57.339649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.232 [2024-11-19 10:54:57.339663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.232 [2024-11-19 10:54:57.339867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.232 [2024-11-19 10:54:57.339874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.232 [2024-11-19 10:54:57.339877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.339881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.339886] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:27:18.232 [2024-11-19 10:54:57.339891] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:27:18.232 [2024-11-19 10:54:57.339900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.339904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.339908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.232 [2024-11-19 10:54:57.339917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.232 [2024-11-19 10:54:57.339928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.232 [2024-11-19 10:54:57.340182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.232 [2024-11-19 10:54:57.340189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.232 [2024-11-19 10:54:57.340192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.340207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.232 [2024-11-19 10:54:57.340221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.232 [2024-11-19 10:54:57.340233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.232 [2024-11-19 10:54:57.340404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.232 [2024-11-19 10:54:57.340410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.232 [2024-11-19 10:54:57.340413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.340427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.232 [2024-11-19 10:54:57.340441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.232 [2024-11-19 10:54:57.340452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.232 [2024-11-19 10:54:57.340620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.232 [2024-11-19 10:54:57.340626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.232 [2024-11-19 10:54:57.340629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.340643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.232 [2024-11-19 10:54:57.340657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.232 [2024-11-19 10:54:57.340668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.232 [2024-11-19 10:54:57.340835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.232 [2024-11-19 10:54:57.340841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.232 [2024-11-19 10:54:57.340844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.340858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.340865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.232 [2024-11-19 10:54:57.340872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.232 [2024-11-19 10:54:57.340885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.232 [2024-11-19 10:54:57.341093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.232 [2024-11-19 10:54:57.341099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.232 [2024-11-19 10:54:57.341102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.341117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.232 [2024-11-19 10:54:57.341131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.232 [2024-11-19 10:54:57.341141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.232 [2024-11-19 10:54:57.341342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.232 [2024-11-19 10:54:57.341349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.232 [2024-11-19 10:54:57.341352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.341366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.232 [2024-11-19 10:54:57.341380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.232 [2024-11-19 10:54:57.341391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.232 [2024-11-19 10:54:57.341588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.232 [2024-11-19 10:54:57.341594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.232 [2024-11-19 10:54:57.341598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.341612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.232 [2024-11-19 10:54:57.341626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.232 [2024-11-19 10:54:57.341637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.232 [2024-11-19 10:54:57.341807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.232 [2024-11-19 10:54:57.341813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.232 [2024-11-19 10:54:57.341817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.232 [2024-11-19 10:54:57.341830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.232 [2024-11-19 10:54:57.341837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.233 [2024-11-19 10:54:57.341844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.233 [2024-11-19 10:54:57.341854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.233 [2024-11-19 10:54:57.342101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.233 [2024-11-19 10:54:57.342107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.233 [2024-11-19 10:54:57.342110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.233 [2024-11-19 10:54:57.342124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.233 [2024-11-19 10:54:57.342138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.233 [2024-11-19 10:54:57.342148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.233 [2024-11-19 10:54:57.342367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.233 [2024-11-19 10:54:57.342374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.233 [2024-11-19 10:54:57.342377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.233 [2024-11-19 10:54:57.342392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.233 [2024-11-19 10:54:57.342406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.233 [2024-11-19 10:54:57.342416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.233 [2024-11-19 10:54:57.342631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.233 [2024-11-19 10:54:57.342637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.233 [2024-11-19 10:54:57.342640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.233 [2024-11-19 10:54:57.342654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.233 [2024-11-19 10:54:57.342668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.233 [2024-11-19 10:54:57.342678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.233 [2024-11-19 10:54:57.342872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.233 [2024-11-19 10:54:57.342879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.233 [2024-11-19 10:54:57.342882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.233 [2024-11-19 10:54:57.342896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.342903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.233 [2024-11-19 10:54:57.342910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.233 [2024-11-19 10:54:57.342920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.233 [2024-11-19 10:54:57.347167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.233 [2024-11-19 10:54:57.347178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.233 [2024-11-19 10:54:57.347182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.347185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.233 [2024-11-19 10:54:57.347196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.347200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.347204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe28690)
00:27:18.233 [2024-11-19 10:54:57.347211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.233 [2024-11-19 10:54:57.347223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8a580, cid 3, qid 0
00:27:18.233 [2024-11-19 10:54:57.347405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.233 [2024-11-19 10:54:57.347411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.233 [2024-11-19 10:54:57.347414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.233 [2024-11-19 10:54:57.347418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe8a580) on tqpair=0xe28690
00:27:18.233 [2024-11-19 10:54:57.347426] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:27:18.233
00:27:18.233 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
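The `-r` argument above is an SPDK transport ID string: whitespace-separated key:value pairs naming the transport type, address family, address, service ID (port), and target subsystem NQN. A minimal sketch of how such a string can be parsed and used to connect with SPDK's public API follows; `connect_by_trid_string` is an illustrative helper name, and environment setup and error reporting are elided.

```c
/* Hedged sketch: parse a transport ID string like the one passed to
 * spdk_nvme_identify and connect to the controller it names. Assumes
 * spdk_env_init() has already been called. */
#include <stddef.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *connect_by_trid_string(const char *trid_str)
{
	struct spdk_nvme_transport_id trid = {0};

	/* Same grammar as the -r option:
	 * "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1" */
	if (spdk_nvme_transport_id_parse(&trid, trid_str) != 0) {
		return NULL;
	}

	/* Synchronous connect; internally this runs the admin-queue init
	 * state machine that the DEBUG trace below logs step by step. */
	return spdk_nvme_connect(&trid, NULL, 0);
}
```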
00:27:18.233 [2024-11-19 10:54:57.393556] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:27:18.233 [2024-11-19 10:54:57.393607] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118884 ]
00:27:18.500 [2024-11-19 10:54:57.450690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:27:18.500 [2024-11-19 10:54:57.450757] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:27:18.500 [2024-11-19 10:54:57.450762] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:27:18.500 [2024-11-19 10:54:57.450777] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:27:18.500 [2024-11-19 10:54:57.450790] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:27:18.500 [2024-11-19 10:54:57.451483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:27:18.500 [2024-11-19 10:54:57.451522] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2117690 0
00:27:18.500 [2024-11-19 10:54:57.462179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:27:18.500 [2024-11-19 10:54:57.462194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:27:18.500 [2024-11-19 10:54:57.462199] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:27:18.500 [2024-11-19 10:54:57.462202] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:27:18.500 [2024-11-19 10:54:57.462236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.462242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.462247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2117690)
00:27:18.500 [2024-11-19 10:54:57.462260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:27:18.500 [2024-11-19 10:54:57.462286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179100, cid 0, qid 0
00:27:18.500 [2024-11-19 10:54:57.469173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.500 [2024-11-19 10:54:57.469183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.500 [2024-11-19 10:54:57.469187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.469192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179100) on tqpair=0x2117690
00:27:18.500 [2024-11-19 10:54:57.469205] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:27:18.500 [2024-11-19 10:54:57.469213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:27:18.500 [2024-11-19 10:54:57.469218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:27:18.500 [2024-11-19 10:54:57.469234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.469238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.469242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2117690)
00:27:18.500 [2024-11-19 10:54:57.469251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.500 [2024-11-19 10:54:57.469267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179100, cid 0, qid 0
00:27:18.500 [2024-11-19 10:54:57.469444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.500 [2024-11-19 10:54:57.469450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.500 [2024-11-19 10:54:57.469454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.469458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179100) on tqpair=0x2117690
00:27:18.500 [2024-11-19 10:54:57.469463] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:27:18.500 [2024-11-19 10:54:57.469471] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:27:18.500 [2024-11-19 10:54:57.469479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.469483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.469486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2117690)
00:27:18.500 [2024-11-19 10:54:57.469493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.500 [2024-11-19 10:54:57.469504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179100, cid 0, qid 0
00:27:18.500 [2024-11-19 10:54:57.469730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.500 [2024-11-19 10:54:57.469736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.500 [2024-11-19 10:54:57.469740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.469744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179100) on tqpair=0x2117690
00:27:18.500 [2024-11-19 10:54:57.469749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:27:18.500 [2024-11-19 10:54:57.469758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:27:18.500 [2024-11-19 10:54:57.469765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.469769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.469772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2117690)
00:27:18.500 [2024-11-19 10:54:57.469779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.500 [2024-11-19 10:54:57.469789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179100, cid 0, qid 0
00:27:18.500 [2024-11-19 10:54:57.470003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.500 [2024-11-19 10:54:57.470010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.500 [2024-11-19 10:54:57.470014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.470018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179100) on tqpair=0x2117690
00:27:18.500 [2024-11-19 10:54:57.470023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:27:18.500 [2024-11-19 10:54:57.470033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.470037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.470041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2117690)
00:27:18.500 [2024-11-19 10:54:57.470047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.500 [2024-11-19 10:54:57.470058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179100, cid 0, qid 0
00:27:18.500 [2024-11-19 10:54:57.470250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.500 [2024-11-19 10:54:57.470257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.500 [2024-11-19 10:54:57.470260] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.470264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179100) on tqpair=0x2117690
00:27:18.500 [2024-11-19 10:54:57.470269] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:27:18.500 [2024-11-19 10:54:57.470274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:27:18.500 [2024-11-19 10:54:57.470282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:27:18.500 [2024-11-19 10:54:57.470392] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:27:18.500 [2024-11-19 10:54:57.470397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:27:18.500 [2024-11-19 10:54:57.470406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.470409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.470413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2117690)
00:27:18.500 [2024-11-19 10:54:57.470420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.500 [2024-11-19 10:54:57.470431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179100, cid 0, qid 0
00:27:18.500 [2024-11-19 10:54:57.470637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.500 [2024-11-19 10:54:57.470644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.500 [2024-11-19 10:54:57.470647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.470651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179100) on tqpair=0x2117690
[2024-11-19 10:54:57.470656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:27:18.500 [2024-11-19 10:54:57.470666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.470670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.500 [2024-11-19 10:54:57.470674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2117690)
00:27:18.500 [2024-11-19 10:54:57.470680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.500 [2024-11-19 10:54:57.470694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179100, cid 0, qid 0
00:27:18.501 [2024-11-19 10:54:57.470910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.501 [2024-11-19 10:54:57.470917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.501 [2024-11-19 10:54:57.470920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.470924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179100) on tqpair=0x2117690
00:27:18.501 [2024-11-19 10:54:57.470929] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:27:18.501 [2024-11-19 10:54:57.470934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.470942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:27:18.501 [2024-11-19 10:54:57.470955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.470965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.470969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2117690)
00:27:18.501 [2024-11-19 10:54:57.470975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.501 [2024-11-19 10:54:57.470986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179100, cid 0, qid 0
00:27:18.501 [2024-11-19 10:54:57.471255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:18.501 [2024-11-19 10:54:57.471263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:18.501 [2024-11-19 10:54:57.471266] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.471270] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2117690): datao=0, datal=4096, cccid=0
00:27:18.501 [2024-11-19 10:54:57.471275] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2179100) on tqpair(0x2117690): expected_datao=0, payload_size=4096
00:27:18.501 [2024-11-19 10:54:57.471280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.471295] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
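The trace just above walks the standard NVMe enable handshake over fabrics property commands: read CC and CSTS, disable (CC.EN = 0) and wait for CSTS.RDY = 0, write CC.EN = 1, then poll until CSTS.RDY = 1 before the first IDENTIFY. A compressed illustration of that state machine follows; prop_get()/prop_set() are hypothetical stand-ins for the FABRIC PROPERTY GET/SET exchange logged above, and SPDK's real implementation is the asynchronous state machine in nvme_ctrlr.c.

```c
/* Hedged sketch of the CC.EN/CSTS.RDY sequence in the trace; not SPDK code. */
#include <stdint.h>

extern uint32_t prop_get(uint32_t ofs);             /* hypothetical helper */
extern void     prop_set(uint32_t ofs, uint32_t v); /* hypothetical helper */

#define REG_CC    0x14u  /* Controller Configuration */
#define REG_CSTS  0x1cu  /* Controller Status */
#define CC_EN     0x1u
#define CSTS_RDY  0x1u

static void enable_controller(void)
{
	uint32_t cc = prop_get(REG_CC);

	if (cc & CC_EN) {                      /* "check en": left enabled */
		prop_set(REG_CC, cc & ~CC_EN); /* disable ... */
		while (prop_get(REG_CSTS) & CSTS_RDY) {
			/* "wait for CSTS.RDY = 0" */
		}
	}
	prop_set(REG_CC, cc | CC_EN);          /* "Setting CC.EN = 1" */
	while (!(prop_get(REG_CSTS) & CSTS_RDY)) {
		/* "wait for CSTS.RDY = 1"; then IDENTIFY follows */
	}
}
```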
00:27:18.501 [2024-11-19 10:54:57.471299] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.501 [2024-11-19 10:54:57.512315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.501 [2024-11-19 10:54:57.512319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179100) on tqpair=0x2117690
00:27:18.501 [2024-11-19 10:54:57.512332] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:27:18.501 [2024-11-19 10:54:57.512337] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:27:18.501 [2024-11-19 10:54:57.512342] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:27:18.501 [2024-11-19 10:54:57.512353] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:27:18.501 [2024-11-19 10:54:57.512358] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:27:18.501 [2024-11-19 10:54:57.512363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.512374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.512384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2117690)
00:27:18.501 [2024-11-19 10:54:57.512400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:27:18.501 [2024-11-19 10:54:57.512413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179100, cid 0, qid 0
00:27:18.501 [2024-11-19 10:54:57.512658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.501 [2024-11-19 10:54:57.512664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.501 [2024-11-19 10:54:57.512667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179100) on tqpair=0x2117690
00:27:18.501 [2024-11-19 10:54:57.512679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2117690)
00:27:18.501 [2024-11-19 10:54:57.512693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:18.501 [2024-11-19 10:54:57.512699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2117690)
00:27:18.501 [2024-11-19 10:54:57.512712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:18.501 [2024-11-19 10:54:57.512719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2117690)
00:27:18.501 [2024-11-19 10:54:57.512732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:18.501 [2024-11-19 10:54:57.512738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690)
00:27:18.501 [2024-11-19 10:54:57.512751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:18.501 [2024-11-19 10:54:57.512756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.512764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.512770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.512774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2117690)
00:27:18.501 [2024-11-19 10:54:57.512781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.501 [2024-11-19 10:54:57.512793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179100, cid 0, qid 0
00:27:18.501 [2024-11-19 10:54:57.512798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179280, cid 1, qid 0
00:27:18.501 [2024-11-19 10:54:57.512803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179400, cid 2, qid 0
00:27:18.501 [2024-11-19 10:54:57.512808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0
00:27:18.501 [2024-11-19 10:54:57.512816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179700, cid 4, qid 0
00:27:18.501 [2024-11-19 10:54:57.513063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.501 [2024-11-19 10:54:57.513069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.501 [2024-11-19 10:54:57.513073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.513077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179700) on tqpair=0x2117690
00:27:18.501 [2024-11-19 10:54:57.513084] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
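The init path above arms four Asynchronous Event Requests (cid 0 through 3) and then reads back the keep-alive timer. An application that wants those events delivered, such as the discovery log change notices this target advertises as Supported, registers a callback. A minimal sketch follows; the callback body and its printout are illustrative, while the registration call and completion union are SPDK's public API.

```c
/* Hedged sketch: consuming the AERs armed in the trace above. */
#include <stdio.h>
#include "spdk/nvme.h"

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* cdw0 of an AER completion encodes the event type and info. */
	union spdk_nvme_async_event_completion ev;

	ev.raw = cpl->cdw0;
	if (ev.bits.async_event_type == SPDK_NVME_ASYNC_EVENT_TYPE_NOTICE) {
		printf("notice event, info=0x%x (e.g. discovery log change)\n",
		       ev.bits.async_event_info);
	}
}

/* Typically called once right after connecting: */
void arm_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}
```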
00:27:18.501 [2024-11-19 10:54:57.513089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.513098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.513105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.513111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.513115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.513119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2117690)
00:27:18.501 [2024-11-19 10:54:57.513125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:27:18.501 [2024-11-19 10:54:57.513136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179700, cid 4, qid 0
00:27:18.501 [2024-11-19 10:54:57.517173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.501 [2024-11-19 10:54:57.517181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.501 [2024-11-19 10:54:57.517185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.517189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179700) on tqpair=0x2117690
00:27:18.501 [2024-11-19 10:54:57.517260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.517271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.517279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.517283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2117690)
00:27:18.501 [2024-11-19 10:54:57.517290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.501 [2024-11-19 10:54:57.517302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179700, cid 4, qid 0
00:27:18.501 [2024-11-19 10:54:57.517493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:18.501 [2024-11-19 10:54:57.517500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:18.501 [2024-11-19 10:54:57.517504] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.517508] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2117690): datao=0, datal=4096, cccid=4
00:27:18.501 [2024-11-19 10:54:57.517512] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2179700) on tqpair(0x2117690): expected_datao=0, payload_size=4096
00:27:18.501 [2024-11-19 10:54:57.517517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.517524] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.517528] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.517680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.501 [2024-11-19 10:54:57.517689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.501 [2024-11-19 10:54:57.517692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.517696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179700) on tqpair=0x2117690
00:27:18.501 [2024-11-19 10:54:57.517706] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:27:18.501 [2024-11-19 10:54:57.517716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.517726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:27:18.501 [2024-11-19 10:54:57.517733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.517737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2117690)
00:27:18.501 [2024-11-19 10:54:57.517743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.501 [2024-11-19 10:54:57.517754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179700, cid 4, qid 0
00:27:18.501 [2024-11-19 10:54:57.517992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:18.501 [2024-11-19 10:54:57.517998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:18.501 [2024-11-19 10:54:57.518002] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.518006] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2117690): datao=0, datal=4096, cccid=4
00:27:18.501 [2024-11-19 10:54:57.518010] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2179700) on tqpair(0x2117690): expected_datao=0, payload_size=4096
00:27:18.501 [2024-11-19 10:54:57.518014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.518021] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.518025] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.518235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.501 [2024-11-19 10:54:57.518242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.501 [2024-11-19 10:54:57.518246] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.501 [2024-11-19 10:54:57.518249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179700) on tqpair=0x2117690
00:27:18.502 [2024-11-19 10:54:57.518263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:27:18.502 [2024-11-19 10:54:57.518273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:27:18.502 [2024-11-19 10:54:57.518280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.518284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2117690)
00:27:18.502 [2024-11-19 10:54:57.518290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.502 [2024-11-19 10:54:57.518302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179700, cid 4, qid 0
00:27:18.502 [2024-11-19 10:54:57.518487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:18.502 [2024-11-19 10:54:57.518493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:18.502 [2024-11-19 10:54:57.518497] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.518501] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2117690): datao=0, datal=4096, cccid=4
00:27:18.502 [2024-11-19 10:54:57.518505] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2179700) on tqpair(0x2117690): expected_datao=0, payload_size=4096
00:27:18.502 [2024-11-19 10:54:57.518510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.518519] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.518523] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.518686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.502 [2024-11-19 10:54:57.518692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.502 [2024-11-19 10:54:57.518696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.518700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179700) on tqpair=0x2117690
00:27:18.502 [2024-11-19 10:54:57.518707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:27:18.502 [2024-11-19 10:54:57.518715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:27:18.502 [2024-11-19 10:54:57.518724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:27:18.502 [2024-11-19 10:54:57.518731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:27:18.502 [2024-11-19 10:54:57.518736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:27:18.502 [2024-11-19 10:54:57.518742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:27:18.502 [2024-11-19 10:54:57.518747] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:27:18.502 [2024-11-19 10:54:57.518751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:27:18.502 [2024-11-19 10:54:57.518757] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:27:18.502 [2024-11-19 10:54:57.518774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.518778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2117690)
00:27:18.502 [2024-11-19 10:54:57.518785] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.502 [2024-11-19 10:54:57.518792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.518796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.518799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2117690)
00:27:18.502 [2024-11-19 10:54:57.518805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:27:18.502 [2024-11-19 10:54:57.518820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179700, cid 4, qid 0
00:27:18.502 [2024-11-19 10:54:57.518825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179880, cid 5, qid 0
00:27:18.502 [2024-11-19 10:54:57.519042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.502 [2024-11-19 10:54:57.519049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.502 [2024-11-19 10:54:57.519052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.519056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179700) on tqpair=0x2117690
00:27:18.502 [2024-11-19 10:54:57.519063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.502 [2024-11-19 10:54:57.519069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.502 [2024-11-19 10:54:57.519072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.519076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179880) on tqpair=0x2117690
00:27:18.502 [2024-11-19 10:54:57.519085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.519092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2117690)
00:27:18.502 [2024-11-19 10:54:57.519098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.502 [2024-11-19 10:54:57.519109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179880, cid 5, qid 0
00:27:18.502 [2024-11-19 10:54:57.519292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:18.502 [2024-11-19 10:54:57.519299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:18.502 [2024-11-19 10:54:57.519302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.519306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179880) on tqpair=0x2117690
00:27:18.502 [2024-11-19 10:54:57.519315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:18.502 [2024-11-19 10:54:57.519319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2117690)
00:27:18.502 [2024-11-19 10:54:57.519326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179880, cid 5, qid 0 00:27:18.502 [2024-11-19 10:54:57.519532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.502 [2024-11-19 10:54:57.519538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.502 [2024-11-19 10:54:57.519541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.502 [2024-11-19 10:54:57.519545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179880) on tqpair=0x2117690 00:27:18.502 [2024-11-19 10:54:57.519555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.502 [2024-11-19 10:54:57.519558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2117690) 00:27:18.502 [2024-11-19 10:54:57.519565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.502 [2024-11-19 10:54:57.519575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179880, cid 5, qid 0 00:27:18.502 [2024-11-19 10:54:57.519797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.502 [2024-11-19 10:54:57.519804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.502 [2024-11-19 10:54:57.519807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.502 [2024-11-19 10:54:57.519811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179880) on tqpair=0x2117690 00:27:18.502 [2024-11-19 10:54:57.519827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.502 [2024-11-19 10:54:57.519831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2117690) 00:27:18.502 [2024-11-19 10:54:57.519838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.502 [2024-11-19 10:54:57.519846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.502 [2024-11-19 10:54:57.519849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2117690) 00:27:18.502 [2024-11-19 10:54:57.519855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.502 [2024-11-19 10:54:57.519863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.502 [2024-11-19 10:54:57.519867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2117690) 00:27:18.502 [2024-11-19 10:54:57.519873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.502 [2024-11-19 10:54:57.519880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.502 [2024-11-19 10:54:57.519886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2117690) 00:27:18.502 [2024-11-19 10:54:57.519892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.502 [2024-11-19 10:54:57.519904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179880, cid 5, qid 0 00:27:18.502 
[2024-11-19 10:54:57.519909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179700, cid 4, qid 0 00:27:18.502 [2024-11-19 10:54:57.519914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179a00, cid 6, qid 0 00:27:18.503 [2024-11-19 10:54:57.519919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179b80, cid 7, qid 0 00:27:18.503 [2024-11-19 10:54:57.520253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.503 [2024-11-19 10:54:57.520260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.503 [2024-11-19 10:54:57.520263] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520267] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2117690): datao=0, datal=8192, cccid=5 00:27:18.503 [2024-11-19 10:54:57.520271] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2179880) on tqpair(0x2117690): expected_datao=0, payload_size=8192 00:27:18.503 [2024-11-19 10:54:57.520276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520316] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520320] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.503 [2024-11-19 10:54:57.520332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.503 [2024-11-19 10:54:57.520335] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520339] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2117690): datao=0, datal=512, cccid=4 00:27:18.503 [2024-11-19 10:54:57.520343] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2179700) on tqpair(0x2117690): expected_datao=0, payload_size=512 00:27:18.503 [2024-11-19 10:54:57.520348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520354] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520358] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.503 [2024-11-19 10:54:57.520369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.503 [2024-11-19 10:54:57.520373] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520376] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2117690): datao=0, datal=512, cccid=6 00:27:18.503 [2024-11-19 10:54:57.520381] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2179a00) on tqpair(0x2117690): expected_datao=0, payload_size=512 00:27:18.503 [2024-11-19 10:54:57.520385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520391] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520395] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.503 [2024-11-19 10:54:57.520406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.503 [2024-11-19 10:54:57.520410] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520413] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2117690): datao=0, datal=4096, cccid=7 00:27:18.503 [2024-11-19 10:54:57.520418] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2179b80) on tqpair(0x2117690): expected_datao=0, payload_size=4096 00:27:18.503 [2024-11-19 10:54:57.520422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520443] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520448] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.503 [2024-11-19 10:54:57.520650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.503 [2024-11-19 10:54:57.520654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179880) on tqpair=0x2117690 00:27:18.503 [2024-11-19 10:54:57.520670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.503 [2024-11-19 10:54:57.520676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.503 [2024-11-19 10:54:57.520679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179700) on tqpair=0x2117690 00:27:18.503 [2024-11-19 10:54:57.520694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.503 [2024-11-19 10:54:57.520700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.503 [2024-11-19 10:54:57.520704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179a00) on tqpair=0x2117690 00:27:18.503 [2024-11-19 10:54:57.520715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.503 [2024-11-19 10:54:57.520720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.503 [2024-11-19 10:54:57.520724] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.503 [2024-11-19 10:54:57.520728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179b80) on tqpair=0x2117690 00:27:18.503 ===================================================== 00:27:18.503 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.503 ===================================================== 00:27:18.503 Controller Capabilities/Features 00:27:18.503 ================================ 00:27:18.503 Vendor ID: 8086 00:27:18.503 Subsystem Vendor ID: 8086 00:27:18.503 Serial Number: SPDK00000000000001 00:27:18.503 Model Number: SPDK bdev Controller 00:27:18.503 Firmware Version: 25.01 00:27:18.503 Recommended Arb Burst: 6 00:27:18.503 IEEE OUI Identifier: e4 d2 5c 00:27:18.503 Multi-path I/O 00:27:18.503 May have multiple subsystem ports: Yes 00:27:18.503 May have multiple controllers: Yes 00:27:18.503 Associated with SR-IOV VF: No 00:27:18.503 Max Data Transfer Size: 131072 00:27:18.503 Max Number of Namespaces: 32 00:27:18.503 Max Number of I/O Queues: 127 00:27:18.503 NVMe Specification Version (VS): 1.3 00:27:18.503 NVMe Specification Version (Identify): 1.3 
00:27:18.503 Maximum Queue Entries: 128 00:27:18.503 Contiguous Queues Required: Yes 00:27:18.503 Arbitration Mechanisms Supported 00:27:18.503 Weighted Round Robin: Not Supported 00:27:18.503 Vendor Specific: Not Supported 00:27:18.503 Reset Timeout: 15000 ms 00:27:18.503 Doorbell Stride: 4 bytes 00:27:18.503 NVM Subsystem Reset: Not Supported 00:27:18.503 Command Sets Supported 00:27:18.503 NVM Command Set: Supported 00:27:18.503 Boot Partition: Not Supported 00:27:18.503 Memory Page Size Minimum: 4096 bytes 00:27:18.503 Memory Page Size Maximum: 4096 bytes 00:27:18.503 Persistent Memory Region: Not Supported 00:27:18.503 Optional Asynchronous Events Supported 00:27:18.503 Namespace Attribute Notices: Supported 00:27:18.503 Firmware Activation Notices: Not Supported 00:27:18.503 ANA Change Notices: Not Supported 00:27:18.503 PLE Aggregate Log Change Notices: Not Supported 00:27:18.503 LBA Status Info Alert Notices: Not Supported 00:27:18.503 EGE Aggregate Log Change Notices: Not Supported 00:27:18.503 Normal NVM Subsystem Shutdown event: Not Supported 00:27:18.503 Zone Descriptor Change Notices: Not Supported 00:27:18.503 Discovery Log Change Notices: Not Supported 00:27:18.503 Controller Attributes 00:27:18.503 128-bit Host Identifier: Supported 00:27:18.503 Non-Operational Permissive Mode: Not Supported 00:27:18.503 NVM Sets: Not Supported 00:27:18.503 Read Recovery Levels: Not Supported 00:27:18.503 Endurance Groups: Not Supported 00:27:18.503 Predictable Latency Mode: Not Supported 00:27:18.503 Traffic Based Keep ALive: Not Supported 00:27:18.503 Namespace Granularity: Not Supported 00:27:18.503 SQ Associations: Not Supported 00:27:18.503 UUID List: Not Supported 00:27:18.503 Multi-Domain Subsystem: Not Supported 00:27:18.503 Fixed Capacity Management: Not Supported 00:27:18.503 Variable Capacity Management: Not Supported 00:27:18.503 Delete Endurance Group: Not Supported 00:27:18.503 Delete NVM Set: Not Supported 00:27:18.503 Extended LBA Formats Supported: Not Supported 00:27:18.503 Flexible Data Placement Supported: Not Supported 00:27:18.503 00:27:18.503 Controller Memory Buffer Support 00:27:18.503 ================================ 00:27:18.503 Supported: No 00:27:18.503 00:27:18.503 Persistent Memory Region Support 00:27:18.503 ================================ 00:27:18.503 Supported: No 00:27:18.503 00:27:18.503 Admin Command Set Attributes 00:27:18.503 ============================ 00:27:18.503 Security Send/Receive: Not Supported 00:27:18.503 Format NVM: Not Supported 00:27:18.503 Firmware Activate/Download: Not Supported 00:27:18.503 Namespace Management: Not Supported 00:27:18.503 Device Self-Test: Not Supported 00:27:18.503 Directives: Not Supported 00:27:18.503 NVMe-MI: Not Supported 00:27:18.503 Virtualization Management: Not Supported 00:27:18.503 Doorbell Buffer Config: Not Supported 00:27:18.503 Get LBA Status Capability: Not Supported 00:27:18.503 Command & Feature Lockdown Capability: Not Supported 00:27:18.503 Abort Command Limit: 4 00:27:18.503 Async Event Request Limit: 4 00:27:18.503 Number of Firmware Slots: N/A 00:27:18.503 Firmware Slot 1 Read-Only: N/A 00:27:18.503 Firmware Activation Without Reset: N/A 00:27:18.503 Multiple Update Detection Support: N/A 00:27:18.503 Firmware Update Granularity: No Information Provided 00:27:18.503 Per-Namespace SMART Log: No 00:27:18.503 Asymmetric Namespace Access Log Page: Not Supported 00:27:18.503 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:18.503 Command Effects Log Page: Supported 00:27:18.503 Get Log Page Extended 
Data: Supported 00:27:18.503 Telemetry Log Pages: Not Supported 00:27:18.503 Persistent Event Log Pages: Not Supported 00:27:18.503 Supported Log Pages Log Page: May Support 00:27:18.503 Commands Supported & Effects Log Page: Not Supported 00:27:18.503 Feature Identifiers & Effects Log Page:May Support 00:27:18.504 NVMe-MI Commands & Effects Log Page: May Support 00:27:18.504 Data Area 4 for Telemetry Log: Not Supported 00:27:18.504 Error Log Page Entries Supported: 128 00:27:18.504 Keep Alive: Supported 00:27:18.504 Keep Alive Granularity: 10000 ms 00:27:18.504 00:27:18.504 NVM Command Set Attributes 00:27:18.504 ========================== 00:27:18.504 Submission Queue Entry Size 00:27:18.504 Max: 64 00:27:18.504 Min: 64 00:27:18.504 Completion Queue Entry Size 00:27:18.504 Max: 16 00:27:18.504 Min: 16 00:27:18.504 Number of Namespaces: 32 00:27:18.504 Compare Command: Supported 00:27:18.504 Write Uncorrectable Command: Not Supported 00:27:18.504 Dataset Management Command: Supported 00:27:18.504 Write Zeroes Command: Supported 00:27:18.504 Set Features Save Field: Not Supported 00:27:18.504 Reservations: Supported 00:27:18.504 Timestamp: Not Supported 00:27:18.504 Copy: Supported 00:27:18.504 Volatile Write Cache: Present 00:27:18.504 Atomic Write Unit (Normal): 1 00:27:18.504 Atomic Write Unit (PFail): 1 00:27:18.504 Atomic Compare & Write Unit: 1 00:27:18.504 Fused Compare & Write: Supported 00:27:18.504 Scatter-Gather List 00:27:18.504 SGL Command Set: Supported 00:27:18.504 SGL Keyed: Supported 00:27:18.504 SGL Bit Bucket Descriptor: Not Supported 00:27:18.504 SGL Metadata Pointer: Not Supported 00:27:18.504 Oversized SGL: Not Supported 00:27:18.504 SGL Metadata Address: Not Supported 00:27:18.504 SGL Offset: Supported 00:27:18.504 Transport SGL Data Block: Not Supported 00:27:18.504 Replay Protected Memory Block: Not Supported 00:27:18.504 00:27:18.504 Firmware Slot Information 00:27:18.504 ========================= 00:27:18.504 Active slot: 1 00:27:18.504 Slot 1 Firmware Revision: 25.01 00:27:18.504 00:27:18.504 00:27:18.504 Commands Supported and Effects 00:27:18.504 ============================== 00:27:18.504 Admin Commands 00:27:18.504 -------------- 00:27:18.504 Get Log Page (02h): Supported 00:27:18.504 Identify (06h): Supported 00:27:18.504 Abort (08h): Supported 00:27:18.504 Set Features (09h): Supported 00:27:18.504 Get Features (0Ah): Supported 00:27:18.504 Asynchronous Event Request (0Ch): Supported 00:27:18.504 Keep Alive (18h): Supported 00:27:18.504 I/O Commands 00:27:18.504 ------------ 00:27:18.504 Flush (00h): Supported LBA-Change 00:27:18.504 Write (01h): Supported LBA-Change 00:27:18.504 Read (02h): Supported 00:27:18.504 Compare (05h): Supported 00:27:18.504 Write Zeroes (08h): Supported LBA-Change 00:27:18.504 Dataset Management (09h): Supported LBA-Change 00:27:18.504 Copy (19h): Supported LBA-Change 00:27:18.504 00:27:18.504 Error Log 00:27:18.504 ========= 00:27:18.504 00:27:18.504 Arbitration 00:27:18.504 =========== 00:27:18.504 Arbitration Burst: 1 00:27:18.504 00:27:18.504 Power Management 00:27:18.504 ================ 00:27:18.504 Number of Power States: 1 00:27:18.504 Current Power State: Power State #0 00:27:18.504 Power State #0: 00:27:18.504 Max Power: 0.00 W 00:27:18.504 Non-Operational State: Operational 00:27:18.504 Entry Latency: Not Reported 00:27:18.504 Exit Latency: Not Reported 00:27:18.504 Relative Read Throughput: 0 00:27:18.504 Relative Read Latency: 0 00:27:18.504 Relative Write Throughput: 0 00:27:18.504 Relative Write Latency: 0 
00:27:18.504 Idle Power: Not Reported 00:27:18.504 Active Power: Not Reported 00:27:18.504 Non-Operational Permissive Mode: Not Supported 00:27:18.504 00:27:18.504 Health Information 00:27:18.504 ================== 00:27:18.504 Critical Warnings: 00:27:18.504 Available Spare Space: OK 00:27:18.504 Temperature: OK 00:27:18.504 Device Reliability: OK 00:27:18.504 Read Only: No 00:27:18.504 Volatile Memory Backup: OK 00:27:18.504 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:18.504 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:18.504 Available Spare: 0% 00:27:18.504 Available Spare Threshold: 0% 00:27:18.504 Life Percentage Used:[2024-11-19 10:54:57.520828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.504 [2024-11-19 10:54:57.520834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2117690) 00:27:18.504 [2024-11-19 10:54:57.520841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.504 [2024-11-19 10:54:57.520852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179b80, cid 7, qid 0 00:27:18.504 [2024-11-19 10:54:57.521069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.504 [2024-11-19 10:54:57.521077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.504 [2024-11-19 10:54:57.521080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.504 [2024-11-19 10:54:57.521084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179b80) on tqpair=0x2117690 00:27:18.504 [2024-11-19 10:54:57.521115] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:27:18.504 [2024-11-19 10:54:57.521126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179100) on tqpair=0x2117690 00:27:18.504 [2024-11-19 10:54:57.521133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.504 [2024-11-19 10:54:57.521138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179280) on tqpair=0x2117690 00:27:18.504 [2024-11-19 10:54:57.521143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.504 [2024-11-19 10:54:57.521148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179400) on tqpair=0x2117690 00:27:18.504 [2024-11-19 10:54:57.521152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.504 [2024-11-19 10:54:57.525165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.504 [2024-11-19 10:54:57.525172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.504 [2024-11-19 10:54:57.525181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.504 [2024-11-19 10:54:57.525187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.504 [2024-11-19 10:54:57.525191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.504 [2024-11-19 10:54:57.525198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:18.504 [2024-11-19 10:54:57.525212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.504 [2024-11-19 10:54:57.525430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.504 [2024-11-19 10:54:57.525437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.504 [2024-11-19 10:54:57.525440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.525444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.525451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.525455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.525459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.525465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.525479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.525680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.525686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.525690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.525694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.525699] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:27:18.505 [2024-11-19 10:54:57.525704] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:27:18.505 [2024-11-19 10:54:57.525713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.525717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.525721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.525727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.525738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.525948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.525954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.525958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.525962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.525972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.525976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.525980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.525986] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.525996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.526235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.526242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.526246] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.526250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.526264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.526268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.526272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.526279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.526289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.526536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.526543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.526546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.526550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.526560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.526564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.526568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.526574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.526584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.526790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.526797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.526800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.526804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.526814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.526818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.526822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.526828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.526838] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.527049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.527055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.527059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.527072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.527087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.527096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.527294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.527301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.527304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.527318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.527335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.527345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.527597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.527603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.527607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.527620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.527635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.527645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.527846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 
10:54:57.527853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.527856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.527870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.527878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.527884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.527895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.528086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.528092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.528095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.528099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.528109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.528113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.528117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.528123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.528133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.528401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.528407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.528411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.528415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.528425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.528429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.528435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.505 [2024-11-19 10:54:57.528441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.505 [2024-11-19 10:54:57.528452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.505 [2024-11-19 10:54:57.528703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.505 [2024-11-19 10:54:57.528710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.505 [2024-11-19 10:54:57.528713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.505 
[2024-11-19 10:54:57.528717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.505 [2024-11-19 10:54:57.528728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.505 [2024-11-19 10:54:57.528732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.506 [2024-11-19 10:54:57.528735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.506 [2024-11-19 10:54:57.528742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.506 [2024-11-19 10:54:57.528752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.506 [2024-11-19 10:54:57.528956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.506 [2024-11-19 10:54:57.528963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.506 [2024-11-19 10:54:57.528966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.506 [2024-11-19 10:54:57.528970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.506 [2024-11-19 10:54:57.528980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.506 [2024-11-19 10:54:57.528984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.506 [2024-11-19 10:54:57.528987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.506 [2024-11-19 10:54:57.528994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.506 [2024-11-19 10:54:57.529004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.506 [2024-11-19 10:54:57.533168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.506 [2024-11-19 10:54:57.533176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.506 [2024-11-19 10:54:57.533180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.506 [2024-11-19 10:54:57.533184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.506 [2024-11-19 10:54:57.533194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.506 [2024-11-19 10:54:57.533198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.506 [2024-11-19 10:54:57.533202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2117690) 00:27:18.506 [2024-11-19 10:54:57.533209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.506 [2024-11-19 10:54:57.533220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2179580, cid 3, qid 0 00:27:18.506 [2024-11-19 10:54:57.533453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.506 [2024-11-19 10:54:57.533459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.506 [2024-11-19 10:54:57.533463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.506 [2024-11-19 10:54:57.533467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2179580) on tqpair=0x2117690 00:27:18.506 [2024-11-19 10:54:57.533475] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:27:18.506 0% 00:27:18.506 Data Units Read: 0 00:27:18.506 Data Units Written: 0 00:27:18.506 Host Read Commands: 0 00:27:18.506 Host Write Commands: 0 00:27:18.506 Controller Busy Time: 0 minutes 00:27:18.506 Power Cycles: 0 00:27:18.506 Power On Hours: 0 hours 00:27:18.506 Unsafe Shutdowns: 0 00:27:18.506 Unrecoverable Media Errors: 0 00:27:18.506 Lifetime Error Log Entries: 0 00:27:18.506 Warning Temperature Time: 0 minutes 00:27:18.506 Critical Temperature Time: 0 minutes 00:27:18.506 00:27:18.506 Number of Queues 00:27:18.506 ================ 00:27:18.506 Number of I/O Submission Queues: 127 00:27:18.506 Number of I/O Completion Queues: 127 00:27:18.506 00:27:18.506 Active Namespaces 00:27:18.506 ================= 00:27:18.506 Namespace ID:1 00:27:18.506 Error Recovery Timeout: Unlimited 00:27:18.506 Command Set Identifier: NVM (00h) 00:27:18.506 Deallocate: Supported 00:27:18.506 Deallocated/Unwritten Error: Not Supported 00:27:18.506 Deallocated Read Value: Unknown 00:27:18.506 Deallocate in Write Zeroes: Not Supported 00:27:18.506 Deallocated Guard Field: 0xFFFF 00:27:18.506 Flush: Supported 00:27:18.506 Reservation: Supported 00:27:18.506 Namespace Sharing Capabilities: Multiple Controllers 00:27:18.506 Size (in LBAs): 131072 (0GiB) 00:27:18.506 Capacity (in LBAs): 131072 (0GiB) 00:27:18.506 Utilization (in LBAs): 131072 (0GiB) 00:27:18.506 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:18.506 EUI64: ABCDEF0123456789 00:27:18.506 UUID: 921537c3-257e-4352-9c35-4d079e6ade23 00:27:18.506 Thin Provisioning: Not Supported 00:27:18.506 Per-NS Atomic Units: Yes 00:27:18.506 Atomic Boundary Size (Normal): 0 00:27:18.506 Atomic Boundary Size (PFail): 0 00:27:18.506 Atomic Boundary Offset: 0 00:27:18.506 Maximum Single Source Range Length: 65535 00:27:18.506 Maximum Copy Length: 65535 00:27:18.506 Maximum Source Range Count: 1 00:27:18.506 NGUID/EUI64 Never Reused: No 00:27:18.506 Namespace Write Protected: No 00:27:18.506 Number of LBA Formats: 1 00:27:18.506 Current LBA Format: LBA Format #00 00:27:18.506 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:18.506 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:27:18.506 rmmod nvme_tcp 00:27:18.506 rmmod nvme_fabrics 00:27:18.506 rmmod nvme_keyring 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1118531 ']' 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1118531 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1118531 ']' 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1118531 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.506 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1118531 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1118531' 00:27:18.768 killing process with pid 1118531 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1118531 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1118531 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.768 10:54:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.323 10:54:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:21.323 00:27:21.323 real 0m11.668s 00:27:21.323 user 0m8.704s 00:27:21.323 sys 0m6.176s 00:27:21.323 10:54:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.323 10:54:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:21.323 ************************************ 00:27:21.323 END TEST nvmf_identify 00:27:21.323 
************************************ 00:27:21.323 10:55:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.324 ************************************ 00:27:21.324 START TEST nvmf_perf 00:27:21.324 ************************************ 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:21.324 * Looking for test storage... 00:27:21.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:21.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.324 --rc genhtml_branch_coverage=1 00:27:21.324 --rc genhtml_function_coverage=1 00:27:21.324 --rc genhtml_legend=1 00:27:21.324 --rc geninfo_all_blocks=1 00:27:21.324 --rc geninfo_unexecuted_blocks=1 00:27:21.324 00:27:21.324 ' 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:21.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.324 --rc genhtml_branch_coverage=1 00:27:21.324 --rc genhtml_function_coverage=1 00:27:21.324 --rc genhtml_legend=1 00:27:21.324 --rc geninfo_all_blocks=1 00:27:21.324 --rc geninfo_unexecuted_blocks=1 00:27:21.324 00:27:21.324 ' 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:21.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.324 --rc genhtml_branch_coverage=1 00:27:21.324 --rc genhtml_function_coverage=1 00:27:21.324 --rc genhtml_legend=1 00:27:21.324 --rc geninfo_all_blocks=1 00:27:21.324 --rc geninfo_unexecuted_blocks=1 00:27:21.324 00:27:21.324 ' 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:21.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.324 --rc genhtml_branch_coverage=1 00:27:21.324 --rc genhtml_function_coverage=1 00:27:21.324 --rc genhtml_legend=1 00:27:21.324 --rc geninfo_all_blocks=1 00:27:21.324 --rc geninfo_unexecuted_blocks=1 00:27:21.324 00:27:21.324 ' 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.324 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:21.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.325 10:55:00 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:21.325 10:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:29.473 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:29.473 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:29.473 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:29.473 10:55:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:29.473 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:29.473 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.474 10:55:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:29.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:29.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms
00:27:29.474
00:27:29.474 --- 10.0.0.2 ping statistics ---
00:27:29.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:29.474 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:29.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:29.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms
00:27:29.474
00:27:29.474 --- 10.0.0.1 ping statistics ---
00:27:29.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:29.474 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1123202
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1123202
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1123202 ']'
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:27:29.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.474 10:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:29.474 [2024-11-19 10:55:07.902537] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:27:29.474 [2024-11-19 10:55:07.902609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.474 [2024-11-19 10:55:08.003711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.474 [2024-11-19 10:55:08.056002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.474 [2024-11-19 10:55:08.056056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.474 [2024-11-19 10:55:08.056065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.474 [2024-11-19 10:55:08.056073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.474 [2024-11-19 10:55:08.056079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.474 [2024-11-19 10:55:08.058487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.474 [2024-11-19 10:55:08.058652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.474 [2024-11-19 10:55:08.058813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.474 [2024-11-19 10:55:08.058814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.735 10:55:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.735 10:55:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:27:29.735 10:55:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:29.735 10:55:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:29.735 10:55:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:29.735 10:55:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.735 10:55:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:29.735 10:55:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:30.307 10:55:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:30.307 10:55:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:30.307 10:55:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:27:30.307 10:55:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:30.567 10:55:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
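[Editorial aside, not captured output] The target configuration that perf.sh applies in the trace that follows is easier to read pulled out of the xtrace noise. A minimal sketch of the same JSON-RPC sequence, with every command and value taken from the surrounding trace (only the grouping, the $rpc_py shorthand, and the comments are added here):

  # Shorthand for the RPC client used throughout this run.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # A 64 MiB malloc bdev with 512-byte blocks joins the local NVMe bdev
  # (Nvme0n1, traddr 0000:65:00.0) as backing storage.
  $rpc_py bdev_malloc_create 64 512        # prints the new bdev name, Malloc0

  # TCP transport (the '-o' flag comes from NVMF_TRANSPORT_OPTS set above),
  # one subsystem, both bdevs as namespaces, and listeners on 10.0.0.2:4420.
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420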
00:27:30.567 10:55:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:27:30.567 10:55:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:30.567 10:55:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:30.567 10:55:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:30.829 [2024-11-19 10:55:09.888239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.829 10:55:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:31.089 10:55:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:31.089 10:55:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:31.350 10:55:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:31.350 10:55:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:31.350 10:55:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.611 [2024-11-19 10:55:10.675534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.611 10:55:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:31.872 10:55:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:27:31.872 10:55:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:27:31.872 10:55:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:31.872 10:55:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:27:33.258 Initializing NVMe Controllers 00:27:33.258 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:27:33.258 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:27:33.258 Initialization complete. Launching workers. 
00:27:33.258 ========================================================
00:27:33.258 Latency(us)
00:27:33.258 Device Information : IOPS MiB/s Average min max
00:27:33.258 PCIE (0000:65:00.0) NSID 1 from core 0: 77711.36 303.56 411.18 13.28 5215.86
00:27:33.258 ========================================================
00:27:33.258 Total : 77711.36 303.56 411.18 13.28 5215.86
00:27:33.258
00:27:33.258 10:55:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:34.644 Initializing NVMe Controllers
00:27:34.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:34.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:34.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:34.644 Initialization complete. Launching workers.
00:27:34.644 ========================================================
00:27:34.644 Latency(us)
00:27:34.644 Device Information : IOPS MiB/s Average min max
00:27:34.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 64.00 0.25 15637.38 230.70 46009.25
00:27:34.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 57.00 0.22 17799.52 7956.04 48887.88
00:27:34.644 ========================================================
00:27:34.644 Total : 121.00 0.47 16655.91 230.70 48887.88
00:27:34.644
00:27:34.644 10:55:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:35.586 Initializing NVMe Controllers
00:27:35.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:35.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:35.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:35.586 Initialization complete. Launching workers.
00:27:35.586 ========================================================
00:27:35.586 Latency(us)
00:27:35.586 Device Information : IOPS MiB/s Average min max
00:27:35.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11877.20 46.40 2696.46 488.66 6271.85
00:27:35.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3663.75 14.31 8781.00 7178.50 16282.94
00:27:35.586 ========================================================
00:27:35.586 Total : 15540.96 60.71 4130.88 488.66 16282.94
00:27:35.586
00:27:35.847 10:55:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:27:35.847 10:55:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:27:35.847 10:55:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:38.390 Initializing NVMe Controllers
00:27:38.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:38.390 Controller IO queue size 128, less than required.
00:27:38.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:38.390 Controller IO queue size 128, less than required.
00:27:38.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:38.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:38.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:38.390 Initialization complete. Launching workers.
00:27:38.390 ========================================================
00:27:38.390 Latency(us)
00:27:38.390 Device Information : IOPS MiB/s Average min max
00:27:38.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1846.68 461.67 69915.43 35314.96 118571.42
00:27:38.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.74 150.69 227793.37 63760.61 357103.86
00:27:38.390 ========================================================
00:27:38.390 Total : 2449.42 612.35 108765.32 35314.96 357103.86
00:27:38.390
00:27:38.390 10:55:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:27:38.390 No valid NVMe controllers or AIO or URING devices found
00:27:38.390 Initializing NVMe Controllers
00:27:38.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:38.390 Controller IO queue size 128, less than required.
00:27:38.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:38.390 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:27:38.390 Controller IO queue size 128, less than required.
00:27:38.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:38.390 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:27:38.390 WARNING: Some requested NVMe devices were skipped
00:27:38.390 10:55:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:27:40.935 Initializing NVMe Controllers
00:27:40.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:40.935 Controller IO queue size 128, less than required.
00:27:40.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:40.935 Controller IO queue size 128, less than required.
00:27:40.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:40.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:40.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:40.935 Initialization complete. Launching workers.
00:27:40.935
00:27:40.935 ====================
00:27:40.935 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:27:40.935 TCP transport:
00:27:40.935 polls: 36275
00:27:40.935 idle_polls: 22360
00:27:40.935 sock_completions: 13915
00:27:40.935 nvme_completions: 7693
00:27:40.935 submitted_requests: 11560
00:27:40.935 queued_requests: 1
00:27:40.935
00:27:40.935 ====================
00:27:40.935 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:27:40.935 TCP transport:
00:27:40.935 polls: 37259
00:27:40.935 idle_polls: 23203
00:27:40.935 sock_completions: 14056
00:27:40.935 nvme_completions: 7301
00:27:40.935 submitted_requests: 10944
00:27:40.935 queued_requests: 1
00:27:40.935 ========================================================
00:27:40.935 Latency(us)
00:27:40.935 Device Information : IOPS MiB/s Average min max
00:27:40.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1922.22 480.56 67404.72 41399.27 117864.30
00:27:40.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1824.26 456.07 71428.45 29735.08 129391.60
00:27:40.935 ========================================================
00:27:40.936 Total : 3746.48 936.62 69363.98 29735.08 129391.60
00:27:40.936
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:41.196 rmmod nvme_tcp
00:27:41.196 rmmod nvme_fabrics
00:27:41.196 rmmod nvme_keyring
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1123202 ']'
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1123202
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1123202 ']'
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1123202
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:41.196 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1123202
00:27:41.456 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:41.456 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:41.456 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1123202'
00:27:41.456 killing process with pid 1123202
00:27:41.456 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1123202
00:27:41.456 10:55:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1123202
00:27:43.369 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:43.369 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:43.369 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:43.369 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:27:43.369 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:27:43.369 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:43.369 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:27:43.369 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:43.369 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:43.370 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:43.370 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:43.370 10:55:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:45.283 10:55:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:45.543
00:27:45.543 real 0m24.419s
00:27:45.543 user 0m58.970s
00:27:45.543 sys 0m8.600s
00:27:45.543 10:55:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:45.543 10:55:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:27:45.543 ************************************
00:27:45.543 END TEST nvmf_perf
00:27:45.543 ************************************
00:27:45.543 10:55:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:27:45.543 10:55:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:45.543 10:55:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:45.543 10:55:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.543 ************************************
00:27:45.543 START TEST nvmf_fio_host
00:27:45.543 ************************************
00:27:45.543 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:27:45.543 * Looking for test storage...
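[Editorial aside, not captured output] For reference while reading the nvmf_perf results above: the test swept spdk_nvme_perf through one local and five fabric runs. The command lines below are copied from the trace; the grouping, the PERF/TGT shorthands, and the comments are editorial, and the per-flag remarks are inferred (verify against spdk_nvme_perf --help rather than relying on them):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

  # Local PCIe baseline, no fabric in the data path.
  $PERF -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'

  # 4 KiB 50/50 random read/write over TCP: queue depth 1, then 32 (with -HI).
  $PERF -q 1 -o 4096 -w randrw -M 50 -t 1 -r "$TGT"
  $PERF -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r "$TGT"

  # 256 KiB IOs at queue depth 128; an unaligned 36964-byte IO size that the
  # tool rejects (not a multiple of the 512-byte sector, as warned above);
  # and the 256 KiB run again with --transport-stat for the poll counters.
  $PERF -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$TGT"
  $PERF -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r "$TGT" -c 0xf -P 4
  $PERF -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$TGT" --transport-stat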
00:27:45.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:45.543 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:45.543 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:45.543 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:45.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.805 --rc genhtml_branch_coverage=1 00:27:45.805 --rc genhtml_function_coverage=1 00:27:45.805 --rc genhtml_legend=1 00:27:45.805 --rc geninfo_all_blocks=1 00:27:45.805 --rc geninfo_unexecuted_blocks=1 00:27:45.805 00:27:45.805 ' 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:45.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.805 --rc genhtml_branch_coverage=1 00:27:45.805 --rc genhtml_function_coverage=1 00:27:45.805 --rc genhtml_legend=1 00:27:45.805 --rc geninfo_all_blocks=1 00:27:45.805 --rc geninfo_unexecuted_blocks=1 00:27:45.805 00:27:45.805 ' 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:45.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.805 --rc genhtml_branch_coverage=1 00:27:45.805 --rc genhtml_function_coverage=1 00:27:45.805 --rc genhtml_legend=1 00:27:45.805 --rc geninfo_all_blocks=1 00:27:45.805 --rc geninfo_unexecuted_blocks=1 00:27:45.805 00:27:45.805 ' 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:45.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.805 --rc genhtml_branch_coverage=1 00:27:45.805 --rc genhtml_function_coverage=1 00:27:45.805 --rc genhtml_legend=1 00:27:45.805 --rc geninfo_all_blocks=1 00:27:45.805 --rc geninfo_unexecuted_blocks=1 00:27:45.805 00:27:45.805 ' 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.805 10:55:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.805 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:45.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:45.806 
10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.806 10:55:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:53.948 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:53.948 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:53.948 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.948 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:53.949 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.949 10:55:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:53.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:27:53.949 00:27:53.949 --- 10.0.0.2 ping statistics --- 00:27:53.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.949 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:53.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:27:53.949 00:27:53.949 --- 10.0.0.1 ping statistics --- 00:27:53.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.949 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1130111 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1130111 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1130111 ']' 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.949 10:55:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.949 [2024-11-19 10:55:32.304624] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:27:53.949 [2024-11-19 10:55:32.304699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.949 [2024-11-19 10:55:32.405607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.949 [2024-11-19 10:55:32.458726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.949 [2024-11-19 10:55:32.458778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.949 [2024-11-19 10:55:32.458787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.949 [2024-11-19 10:55:32.458794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.949 [2024-11-19 10:55:32.458802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.949 [2024-11-19 10:55:32.461213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.949 [2024-11-19 10:55:32.461319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.949 [2024-11-19 10:55:32.461461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.949 [2024-11-19 10:55:32.461463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.949 10:55:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.949 10:55:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:27:53.949 10:55:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:54.210 [2024-11-19 10:55:33.290588] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.210 10:55:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:54.210 10:55:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.210 10:55:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.210 10:55:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:54.472 Malloc1 00:27:54.472 10:55:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:54.733 10:55:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:54.995 10:55:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.995 [2024-11-19 10:55:34.147957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.995 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:55.256 10:55:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:55.826 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:55.826 fio-3.35 00:27:55.826 Starting 1 thread 00:27:58.372 00:27:58.372 test: (groupid=0, jobs=1): 
err= 0: pid=1130799: Tue Nov 19 10:55:37 2024 00:27:58.372 read: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(85.8MiB/2004msec) 00:27:58.372 slat (usec): min=2, max=299, avg= 2.18, stdev= 2.89 00:27:58.372 clat (usec): min=3774, max=9006, avg=6464.55, stdev=1199.35 00:27:58.372 lat (usec): min=3809, max=9008, avg=6466.73, stdev=1199.37 00:27:58.372 clat percentiles (usec): 00:27:58.372 | 1.00th=[ 4424], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5080], 00:27:58.372 | 30.00th=[ 5276], 40.00th=[ 6194], 50.00th=[ 6915], 60.00th=[ 7177], 00:27:58.372 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8029], 00:27:58.372 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8848], 99.95th=[ 8848], 00:27:58.372 | 99.99th=[ 8979] 00:27:58.372 bw ( KiB/s): min=37904, max=55608, per=99.84%, avg=43764.00, stdev=8153.59, samples=4 00:27:58.372 iops : min= 9476, max=13902, avg=10941.00, stdev=2038.40, samples=4 00:27:58.372 write: IOPS=10.9k, BW=42.7MiB/s (44.8MB/s)(85.6MiB/2004msec); 0 zone resets 00:27:58.372 slat (usec): min=2, max=274, avg= 2.25, stdev= 2.03 00:27:58.372 clat (usec): min=2906, max=7767, avg=5195.81, stdev=949.59 00:27:58.372 lat (usec): min=2924, max=7769, avg=5198.06, stdev=949.63 00:27:58.372 clat percentiles (usec): 00:27:58.372 | 1.00th=[ 3589], 5.00th=[ 3818], 10.00th=[ 3949], 20.00th=[ 4113], 00:27:58.372 | 30.00th=[ 4293], 40.00th=[ 4948], 50.00th=[ 5538], 60.00th=[ 5735], 00:27:58.372 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6325], 95.00th=[ 6456], 00:27:58.372 | 99.00th=[ 6783], 99.50th=[ 6849], 99.90th=[ 7111], 99.95th=[ 7242], 00:27:58.372 | 99.99th=[ 7504] 00:27:58.372 bw ( KiB/s): min=38336, max=55568, per=99.98%, avg=43716.00, stdev=8131.37, samples=4 00:27:58.372 iops : min= 9584, max=13892, avg=10929.00, stdev=2032.84, samples=4 00:27:58.372 lat (msec) : 4=6.72%, 10=93.28% 00:27:58.372 cpu : usr=73.14%, sys=25.86%, ctx=61, majf=0, minf=17 00:27:58.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:58.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:58.372 issued rwts: total=21961,21906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:58.372 00:27:58.372 Run status group 0 (all jobs): 00:27:58.372 READ: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=85.8MiB (90.0MB), run=2004-2004msec 00:27:58.372 WRITE: bw=42.7MiB/s (44.8MB/s), 42.7MiB/s-42.7MiB/s (44.8MB/s-44.8MB/s), io=85.6MiB (89.7MB), run=2004-2004msec 00:27:58.372 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 
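For reference, each fio_nvme call traced here reduces to preloading SPDK's fio plugin and addressing the TCP-attached namespace through the plugin's filename syntax (trtype/adrfam/traddr/trsvcid/ns, space-separated). A minimal sketch of that invocation, with the long workspace prefix abstracted to a $SPDK_DIR placeholder (an assumption; substitute your own build tree):

  # Run fio against an NVMe/TCP namespace via the SPDK external ioengine,
  # exactly as the trace above does with its workspace paths.
  SPDK_DIR=/path/to/spdk   # placeholder, not a path from this log
  LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" /usr/src/fio/fio \
    "$SPDK_DIR/app/fio/nvme/example_config.fio" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096

The second run below follows the same pattern with mock_sgl_config.fio in place of example_config.fio.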
00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:58.373 10:55:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:58.373 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:58.373 fio-3.35 00:27:58.373 Starting 1 thread 00:28:01.111 00:28:01.111 test: (groupid=0, jobs=1): err= 0: pid=1131561: Tue Nov 19 10:55:39 2024 00:28:01.111 read: IOPS=9732, BW=152MiB/s (159MB/s)(305MiB/2005msec) 00:28:01.111 slat (usec): min=3, max=114, avg= 3.58, stdev= 1.56 00:28:01.111 clat (usec): min=1218, max=16679, avg=8038.17, stdev=1907.59 00:28:01.111 lat (usec): min=1222, max=16683, avg=8041.75, stdev=1907.70 00:28:01.111 clat percentiles (usec): 00:28:01.111 | 1.00th=[ 3982], 5.00th=[ 5145], 10.00th=[ 5735], 20.00th=[ 6325], 00:28:01.111 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8455], 00:28:01.111 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[10945], 00:28:01.111 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13698], 99.95th=[14091], 00:28:01.111 | 99.99th=[14353] 00:28:01.111 bw ( KiB/s): min=74144, max=84224, per=49.43%, avg=76976.00, stdev=4867.68, samples=4 00:28:01.111 iops : min= 4634, max= 5264, avg=4811.00, stdev=304.23, samples=4 00:28:01.111 write: IOPS=5801, BW=90.7MiB/s (95.1MB/s)(157MiB/1736msec); 0 zone resets 00:28:01.111 slat (usec): min=39, 
max=326, avg=40.83, stdev= 6.78 00:28:01.111 clat (usec): min=2025, max=13936, avg=8939.67, stdev=1357.89 00:28:01.111 lat (usec): min=2065, max=14073, avg=8980.49, stdev=1359.31 00:28:01.111 clat percentiles (usec): 00:28:01.111 | 1.00th=[ 5866], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 7832], 00:28:01.111 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:28:01.111 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10683], 95.00th=[11207], 00:28:01.111 | 99.00th=[12387], 99.50th=[13042], 99.90th=[13435], 99.95th=[13566], 00:28:01.111 | 99.99th=[13829] 00:28:01.111 bw ( KiB/s): min=76576, max=87616, per=86.45%, avg=80248.00, stdev=5004.18, samples=4 00:28:01.111 iops : min= 4786, max= 5476, avg=5015.50, stdev=312.76, samples=4 00:28:01.111 lat (msec) : 2=0.08%, 4=0.71%, 10=80.11%, 20=19.09% 00:28:01.111 cpu : usr=86.33%, sys=12.57%, ctx=11, majf=0, minf=33 00:28:01.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:01.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:01.111 issued rwts: total=19513,10072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:01.111 00:28:01.111 Run status group 0 (all jobs): 00:28:01.111 READ: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=305MiB (320MB), run=2005-2005msec 00:28:01.111 WRITE: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=157MiB (165MB), run=1736-1736msec 00:28:01.111 10:55:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:01.111 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:01.112 rmmod nvme_tcp 00:28:01.112 rmmod nvme_fabrics 00:28:01.112 rmmod nvme_keyring 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1130111 ']' 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1130111 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1130111 ']' 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # 
kill -0 1130111 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1130111 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1130111' 00:28:01.112 killing process with pid 1130111 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1130111 00:28:01.112 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1130111 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.373 10:55:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.287 10:55:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:03.287 00:28:03.287 real 0m17.878s 00:28:03.287 user 1m2.966s 00:28:03.287 sys 0m7.769s 00:28:03.287 10:55:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.287 10:55:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.287 ************************************ 00:28:03.287 END TEST nvmf_fio_host 00:28:03.287 ************************************ 00:28:03.287 10:55:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:03.287 10:55:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:03.287 10:55:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.287 10:55:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.549 ************************************ 00:28:03.549 START TEST nvmf_failover 00:28:03.549 ************************************ 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:03.549 * Looking for test storage... 00:28:03.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:03.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.549 --rc genhtml_branch_coverage=1 00:28:03.549 --rc genhtml_function_coverage=1 00:28:03.549 --rc genhtml_legend=1 00:28:03.549 --rc geninfo_all_blocks=1 00:28:03.549 --rc geninfo_unexecuted_blocks=1 00:28:03.549 00:28:03.549 ' 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:03.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.549 --rc genhtml_branch_coverage=1 00:28:03.549 --rc genhtml_function_coverage=1 00:28:03.549 --rc genhtml_legend=1 00:28:03.549 --rc geninfo_all_blocks=1 00:28:03.549 --rc geninfo_unexecuted_blocks=1 00:28:03.549 00:28:03.549 ' 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:03.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.549 --rc genhtml_branch_coverage=1 00:28:03.549 --rc genhtml_function_coverage=1 00:28:03.549 --rc genhtml_legend=1 00:28:03.549 --rc geninfo_all_blocks=1 00:28:03.549 --rc geninfo_unexecuted_blocks=1 00:28:03.549 00:28:03.549 ' 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:03.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.549 --rc genhtml_branch_coverage=1 00:28:03.549 --rc genhtml_function_coverage=1 00:28:03.549 --rc genhtml_legend=1 00:28:03.549 --rc geninfo_all_blocks=1 00:28:03.549 --rc geninfo_unexecuted_blocks=1 00:28:03.549 00:28:03.549 ' 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.549 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
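Before any traffic runs, failover.sh configures the target over JSON-RPC the same way fio.sh did above: launch nvmf_tgt inside the target namespace, create the TCP transport, back a subsystem with a malloc bdev, and expose a listener. A condensed sketch of that sequence, assembled from the commands traced in this log ($SPDK_DIR again stands in for the workspace path):

  # Start the target in the namespace (core mask is 0xF in the fio test, 0xE here).
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  RPC="$SPDK_DIR/scripts/rpc.py"
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc1    # failover.sh names its bdev Malloc0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420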
00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.811 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.812 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:03.812 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:03.812 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.812 10:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:11.955 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:11.955 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.955 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:11.956 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:11.956 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.956 10:55:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:11.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:28:11.956 00:28:11.956 --- 10.0.0.2 ping statistics --- 00:28:11.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.956 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:28:11.956 00:28:11.956 --- 10.0.0.1 ping statistics --- 00:28:11.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.956 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1136235 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1136235 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1136235 ']' 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.956 10:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:11.956 [2024-11-19 10:55:50.398888] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:28:11.956 [2024-11-19 10:55:50.398958] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.956 [2024-11-19 10:55:50.499558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:11.956 [2024-11-19 10:55:50.552362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:11.956 [2024-11-19 10:55:50.552414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:11.956 [2024-11-19 10:55:50.552424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:11.956 [2024-11-19 10:55:50.552431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:11.956 [2024-11-19 10:55:50.552437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:11.956 [2024-11-19 10:55:50.554503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:11.956 [2024-11-19 10:55:50.554664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:11.956 [2024-11-19 10:55:50.554666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:12.217 10:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:12.217 10:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:28:12.217 10:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:12.217 10:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:12.217 10:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:12.217 10:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:12.217 10:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:28:12.479 [2024-11-19 10:55:51.439104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:12.479 10:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:28:12.479 Malloc0
00:28:12.740 10:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:12.740 10:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:13.001 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:13.263 [2024-11-19 10:55:52.258350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:13.263 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:28:13.263 [2024-11-19 10:55:52.454969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:28:13.524 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:28:13.524 [2024-11-19 10:55:52.647628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:28:13.524 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:28:13.524 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1136662
00:28:13.524 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:13.524 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1136662 /var/tmp/bdevperf.sock
00:28:13.525 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1136662 ']'
00:28:13.525 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:13.525 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:13.525 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:13.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:13.525 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:13.525 10:55:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:14.467 10:55:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:14.467 10:55:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:28:14.467 10:55:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:14.727 NVMe0n1
00:28:14.727 10:55:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:14.988
00:28:14.988 10:55:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1137001
00:28:14.989 10:55:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:14.989 10:55:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:28:16.374 10:55:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:16.374 [2024-11-19 10:55:55.334550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10844f0 is same with the state(6) to be set
00:28:16.374 [2024-11-19 10:55:55.334586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10844f0 is same with the state(6) to be set
00:28:16.374 [2024-11-19 10:55:55.334593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10844f0 is same with the state(6) to be set
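
By host/failover.sh@28 the target side is fully built: one TCP transport, a Malloc0 RAM disk exported as a namespace of nqn.2016-06.io.spdk:cnode1, and three interchangeable listeners on ports 4420-4422 of 10.0.0.2. Removing and re-adding those listeners, starting with 4420 at host/failover.sh@43 above, is the only fault-injection mechanism the test needs. A condensed replay of the construction, assuming $SPDK_DIR points at an SPDK checkout and rpc.py talks to the target's default /var/tmp/spdk.sock:

# Target-side construction replayed from the trace; flags exactly as failover.sh passes them.
RPC="$SPDK_DIR/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                          # three interchangeable portals
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done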
00:28:16.375 10:55:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:28:19.703 10:55:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:19.703
00:28:19.703 10:55:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:28:19.703 [2024-11-19 10:55:58.827271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1085040 is same with the state(6) to be set
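
At this point the host has survived the first fault: the 4420 listener is gone, I/O has moved to the 4421 path, and host/failover.sh@47 attaches 4422 as a fresh standby before 4421 is withdrawn in turn. The -x failover flag selects bdev_nvme's active-passive multipath mode, where repeated attaches under the same -b name add alternate paths to one controller instead of creating new bdevs. A sketch of the host-side sequence against the bdevperf app socket, under the same $SPDK_DIR assumption as above:

RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
# First attach creates controller NVMe0 and exposes bdev NVMe0n1.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# Same -b name with -x failover: 4421 becomes a passive alternate path, no new bdev.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover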
00:28:19.703 10:55:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:28:22.999 10:56:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:22.999 [2024-11-19 10:56:02.019472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:22.999 10:56:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:28:23.939 10:56:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:28:24.200 [2024-11-19 10:56:03.211049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4a4c0 is same with the state(6) to be set
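
The choreography above completes a full rotation: after 4421 disappears, I/O rides on 4422, then 4420 is re-published at host/failover.sh@53 and 4422 removed at @57, so the run finishes on the original portal. While the listeners flip, the host's view of the paths can be inspected over the bdevperf RPC socket; a sketch, assuming the bdev_nvme_get_controllers RPC is available in this SPDK tree and using jq only for pretty-printing:

# Poll NVMe0's controller/path state as listeners come and go (hypothetical usage).
$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0 | jq .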
00:28:24.201 10:56:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1137001
00:28:30.801 {
00:28:30.801 "results": [
00:28:30.801 {
00:28:30.801 "job": "NVMe0n1",
00:28:30.801 "core_mask": "0x1",
00:28:30.801 "workload": "verify",
00:28:30.801 "status": "finished",
00:28:30.801 "verify_range": {
00:28:30.801 "start": 0,
00:28:30.801 "length": 16384
00:28:30.801 },
00:28:30.801 "queue_depth": 128,
00:28:30.801 "io_size": 4096,
00:28:30.801 "runtime": 15.008939,
00:28:30.801 "iops": 12368.762375541668,
00:28:30.801 "mibps": 48.31547802945964,
00:28:30.801 "io_failed": 13060,
00:28:30.801 "io_timeout": 0,
00:28:30.801 "avg_latency_us": 9647.673496995501,
00:28:30.801 "min_latency_us": 546.1333333333333,
00:28:30.801 "max_latency_us": 21189.97333333333
00:28:30.801 }
00:28:30.801 ],
00:28:30.801 "core_count": 1
00:28:30.801 }
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1136662
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1136662 ']'
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1136662
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136662
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
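
The JSON summary printed after the wait is internally consistent: iops times io_size gives 12368.762375541668 * 4096 / 1048576 = 48.3155 MiB/s, matching the reported mibps, and Little's law is a reasonable sanity check too, since queue depth 128 divided by the total completion rate (iops plus io_failed/runtime, roughly 13239 per second) predicts about 9.7 ms, close to the reported avg_latency_us of 9647.67. A small cross-check of the throughput figure, assuming the JSON block has been saved to results.json and jq/awk are installed:

# Recompute MiB/s from iops * io_size and compare with the mibps field bdevperf reported.
jq -r '.results[0] | "\(.iops) \(.io_size) \(.mibps)"' results.json |
awk '{ printf "computed %.4f MiB/s, reported %.4f MiB/s\n", $1 * $2 / 1048576, $3 }'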
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1136662'
00:28:30.801 killing process with pid 1136662
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1136662
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1136662
00:28:30.801 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:30.801 [2024-11-19 10:55:52.727547] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:28:30.801 [2024-11-19 10:55:52.727625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136662 ]
00:28:30.801 [2024-11-19 10:55:52.820846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:30.801 [2024-11-19 10:55:52.872864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:30.801 Running I/O for 15 seconds...
00:28:30.801 11099.00 IOPS, 43.36 MiB/s [2024-11-19T09:56:09.996Z]
00:28:30.801 [2024-11-19 10:55:55.337083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.801 [2024-11-19 10:55:55.337118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:18 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96288 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.803 [2024-11-19 10:55:55.338894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.803 [2024-11-19 10:55:55.338901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.338910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.804 [2024-11-19 10:55:55.338917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.338926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.804 [2024-11-19 10:55:55.338933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.338943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.804 [2024-11-19 10:55:55.338951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.338960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.804 [2024-11-19 
10:55:55.338967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.338976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.804 [2024-11-19 10:55:55.338983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.338993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.804 [2024-11-19 10:55:55.339001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.804 [2024-11-19 10:55:55.339017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.804 [2024-11-19 10:55:55.339033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96408 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96416 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96424 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:96432 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96440 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96448 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96456 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96464 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96472 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96480 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 
[2024-11-19 10:55:55.339320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96488 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96496 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96504 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.339407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.339413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.339419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96512 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.339426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.350610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.350639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.350649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96520 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.350659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.804 [2024-11-19 10:55:55.350666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.804 [2024-11-19 10:55:55.350672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.804 [2024-11-19 10:55:55.350683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96528 len:8 PRP1 0x0 PRP2 0x0 00:28:30.804 [2024-11-19 10:55:55.350691] nvme_qpair.c: 
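Every completion in the burst above carries the same status tuple, which SPDK prints as (00/08): the first field is the NVMe Status Code Type (0x0, Generic Command Status) and the second the Status Code (0x08, Command Aborted due to SQ Deletion). These I/Os did not fail on media; they were flushed because their submission queue was deleted when the first path went down. A minimal shell sketch for splitting that tuple when sifting such logs (variable names are illustrative, not part of this run):

  status="00/08"              # as printed in "ABORTED - SQ DELETION (00/08)"
  sct=$((16#${status%/*}))    # status code type: 0 = Generic Command Status
  sc=$((16#${status#*/}))     # status code: 8 = Command Aborted due to SQ Deletion
  echo "sct=$sct sc=$sc"      # -> sct=0 sc=8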
00:28:30.804 [2024-11-19 10:55:55.350737] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:30.804 [2024-11-19 10:55:55.350769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.804 [2024-11-19 10:55:55.350778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.804 [2024-11-19 10:55:55.350788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.804 [2024-11-19 10:55:55.350795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.804 [2024-11-19 10:55:55.350803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.804 [2024-11-19 10:55:55.350811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.805 [2024-11-19 10:55:55.350819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.805 [2024-11-19 10:55:55.350826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.805 [2024-11-19 10:55:55.350834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:30.805 [2024-11-19 10:55:55.350879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b3d70 (9): Bad file descriptor
00:28:30.805 [2024-11-19 10:55:55.354431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:30.805 [2024-11-19 10:55:55.385213] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
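This is the failover path behaving as designed: bdev_nvme detects the dead TCP connection on 10.0.0.2:4420, fails the controller, aborts everything still queued, and reconnects to the alternate listener on 10.0.0.2:4421; the throughput samples that follow show I/O resuming on the new path. A pair like this is typically registered with two attach calls against the same subsystem, the second adding the alternate path (a sketch using SPDK's rpc.py; the bdev name Nvme0 is illustrative, and -x failover is my assumption for the multipath mode used in this run):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # same -b and -n: registers 10.0.0.2:4421 as a failover path for Nvme0
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover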
00:28:30.805 10931.50 IOPS, 42.70 MiB/s [2024-11-19T09:56:10.000Z] 11105.67 IOPS, 43.38 MiB/s [2024-11-19T09:56:10.000Z] 11510.50 IOPS, 44.96 MiB/s [2024-11-19T09:56:10.000Z]
00:28:30.805 [2024-11-19 10:55:58.828639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.805 [2024-11-19 10:55:58.828670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.805 [2024-11-19 10:55:58.828677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.805 [2024-11-19 10:55:58.828683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.805 [2024-11-19 10:55:58.828689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.805 [2024-11-19 10:55:58.828695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.805 [2024-11-19 10:55:58.828700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.805 [2024-11-19 10:55:58.828706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.805 [2024-11-19 10:55:58.828711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b3d70 is same with the state(6) to be set
00:28:30.805 [2024-11-19 10:55:58.828763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.805 [2024-11-19 10:55:58.828771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 81 further WRITE commands (lba:44448-45088, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and 14 READ commands (lba:44080-44184, len:8, SGL TRANSPORT DATA BLOCK) on sqid:1, each completed ABORTED - SQ DELETION (00/08), trimmed ...]
00:28:30.808 [2024-11-19 10:55:58.829925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.829930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.829937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.829942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.829948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.829953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.829960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.829965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.829971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.829979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.829985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.829991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.829998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.808 [2024-11-19 10:55:58.830043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 
10:55:58.830192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.808 [2024-11-19 10:55:58.830257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.808 [2024-11-19 10:55:58.830262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:55:58.830268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:55:58.830273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:55:58.830280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:55:58.830286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:55:58.830293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:55:58.830299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:55:58.830305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:55:58.830311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:55:58.830329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.809 [2024-11-19 10:55:58.830334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.809 [2024-11-19 10:55:58.830338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44432 len:8 PRP1 0x0 PRP2 0x0 00:28:30.809 [2024-11-19 10:55:58.830344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:55:58.830377] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:28:30.809 [2024-11-19 10:55:58.830384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:28:30.809 [2024-11-19 10:55:58.832822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:30.809 [2024-11-19 10:55:58.832844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b3d70 (9): Bad file descriptor 00:28:30.809 [2024-11-19 10:55:58.975730] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:28:30.809 11411.60 IOPS, 44.58 MiB/s [2024-11-19T09:56:10.004Z] 11677.33 IOPS, 45.61 MiB/s [2024-11-19T09:56:10.004Z] 11889.14 IOPS, 46.44 MiB/s [2024-11-19T09:56:10.004Z] 12010.00 IOPS, 46.91 MiB/s [2024-11-19T09:56:10.004Z] [2024-11-19 10:56:03.212175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:65 nsid:1 lba:22616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22696 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.809 [2024-11-19 10:56:03.212523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.809 [2024-11-19 10:56:03.212564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.809 [2024-11-19 10:56:03.212569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 
10:56:03.212643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.810 [2024-11-19 10:56:03.212824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.810 [2024-11-19 10:56:03.212836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.810 [2024-11-19 10:56:03.212848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.810 [2024-11-19 10:56:03.212859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.810 [2024-11-19 10:56:03.212870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.810 [2024-11-19 10:56:03.212882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.810 [2024-11-19 10:56:03.212893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.810 [2024-11-19 10:56:03.212905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.810 [2024-11-19 10:56:03.212916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.810 [2024-11-19 10:56:03.212929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.810 [2024-11-19 10:56:03.212935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.810 [2024-11-19 10:56:03.212940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.212947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.212952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.212958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.212963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.212970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.212975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.212981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.212986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.212993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.212999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 
10:56:03.213125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.811 [2024-11-19 10:56:03.213391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.811 [2024-11-19 10:56:03.213409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.811 [2024-11-19 10:56:03.213415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:8 PRP1 0x0 PRP2 0x0 00:28:30.811 [2024-11-19 10:56:03.213420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same nvme_qpair.c pattern (579:nvme_qpair_abort_queued_reqs "aborting queued i/o", 558:nvme_qpair_manual_complete_request "Command completed manually:", the queued command print, then a 474:spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" completion) repeats for every remaining queued WRITE, lba:23432 through lba:23592, and every queued READ, lba:22984 through lba:23032; repeated entries condensed]
00:28:30.813 [2024-11-19 10:56:03.225611] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:30.813 [2024-11-19 10:56:03.225637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.813 [2024-11-19 10:56:03.225645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.813 [2024-11-19 10:56:03.225653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.813 [2024-11-19 10:56:03.225658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.813 [2024-11-19 10:56:03.225664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.813 [2024-11-19 10:56:03.225669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.813 [2024-11-19 10:56:03.225675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.813 [2024-11-19 10:56:03.225682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.813 [2024-11-19 10:56:03.225688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:28:30.813 [2024-11-19 10:56:03.225722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b3d70 (9): Bad file descriptor 00:28:30.813 [2024-11-19 10:56:03.228200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:28:30.813 12018.67 IOPS, 46.95 MiB/s [2024-11-19T09:56:10.008Z] [2024-11-19 10:56:03.296986] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
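The flood of "ABORTED - SQ DELETION" completions above is the expected signature of the path switch: when bdev_nvme starts failover from 10.0.0.2:4422 to 10.0.0.2:4420 it disconnects the qpair, deletes the submission queue, and manually completes every still-queued request with that status before resetting the controller. The script's pass/fail check just below simply counts the "Resetting controller successful" notices. As a sketch only (the grep patterns are the literal NOTICE strings printed above; the log path is the try.txt this test writes), the same evidence can be tallied from a captured log:

    # Hypothetical one-liners for summarizing failover activity from the captured log.
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    grep -c 'ABORTED - SQ DELETION' "$log"            # queued I/O flushed during SQ teardown
    grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$log" | sort | uniq -c   # path switches
    grep -c 'Resetting controller successful' "$log"  # resets that completed (script expects 3)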
00:28:30.813 12094.80 IOPS, 47.25 MiB/s [2024-11-19T09:56:10.008Z] 12157.00 IOPS, 47.49 MiB/s [2024-11-19T09:56:10.008Z] 12221.08 IOPS, 47.74 MiB/s [2024-11-19T09:56:10.008Z] 12282.15 IOPS, 47.98 MiB/s [2024-11-19T09:56:10.008Z] 12317.50 IOPS, 48.12 MiB/s 00:28:30.813 Latency(us) 00:28:30.813 [2024-11-19T09:56:10.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.813 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.813 Verification LBA range: start 0x0 length 0x4000 00:28:30.813 NVMe0n1 : 15.01 12368.76 48.32 870.15 0.00 9647.67 546.13 21189.97 00:28:30.813 [2024-11-19T09:56:10.008Z] =================================================================================================================== 00:28:30.813 [2024-11-19T09:56:10.008Z] Total : 12368.76 48.32 870.15 0.00 9647.67 546.13 21189.97 00:28:30.813 Received shutdown signal, test time was about 15.000000 seconds 00:28:30.813 00:28:30.813 Latency(us) 00:28:30.813 [2024-11-19T09:56:10.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.813 [2024-11-19T09:56:10.008Z] =================================================================================================================== 00:28:30.813 [2024-11-19T09:56:10.008Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1139899 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1139899 /var/tmp/bdevperf.sock 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1139899 ']' 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:30.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
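Everything from here down to the next latency table is a fresh bdevperf instance driven over its private RPC socket: -z starts bdevperf idle so it can be configured over RPC, -r selects the socket, and -f keeps the process alive through I/O failures so the failover itself can be exercised. Condensed from the trace, the driving pattern is roughly as follows (paths as in this workspace; waitforlisten is the autotest helper that blocks until the socket accepts RPCs; a sketch, not the full script):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z) on a private RPC socket (-r); -f: keep running on I/O failure.
    "$spdk"/build/examples/bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
    waitforlisten $! "$sock"

    # Attach the subsystem through one portal; -x failover allows later path switches.
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

    # Run the configured verify workload and collect the JSON results.
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests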
00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.813 10:56:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:31.384 10:56:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.384 10:56:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:28:31.384 10:56:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:31.384 [2024-11-19 10:56:10.507549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:31.384 10:56:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:31.646 [2024-11-19 10:56:10.692007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:31.646 10:56:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:31.907 NVMe0n1 00:28:31.907 10:56:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:32.168 00:28:32.168 10:56:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:32.429 00:28:32.429 10:56:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:32.429 10:56:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:28:32.690 10:56:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:32.951 10:56:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:28:36.265 10:56:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:36.265 10:56:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:28:36.265 10:56:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1141026 00:28:36.265 10:56:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:36.265 10:56:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1141026 00:28:37.206 { 00:28:37.206 "results": [ 00:28:37.206 { 00:28:37.206 "job": "NVMe0n1", 00:28:37.206 "core_mask": "0x1", 
00:28:37.206 "workload": "verify", 00:28:37.206 "status": "finished", 00:28:37.206 "verify_range": { 00:28:37.206 "start": 0, 00:28:37.206 "length": 16384 00:28:37.206 }, 00:28:37.206 "queue_depth": 128, 00:28:37.206 "io_size": 4096, 00:28:37.206 "runtime": 1.009488, 00:28:37.206 "iops": 12629.17439335584, 00:28:37.206 "mibps": 49.33271247404625, 00:28:37.206 "io_failed": 0, 00:28:37.206 "io_timeout": 0, 00:28:37.206 "avg_latency_us": 10086.502541898712, 00:28:37.206 "min_latency_us": 1897.8133333333333, 00:28:37.206 "max_latency_us": 14854.826666666666 00:28:37.206 } 00:28:37.206 ], 00:28:37.206 "core_count": 1 00:28:37.206 } 00:28:37.206 10:56:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:37.206 [2024-11-19 10:56:09.555134] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:28:37.206 [2024-11-19 10:56:09.555201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139899 ] 00:28:37.206 [2024-11-19 10:56:09.640673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.206 [2024-11-19 10:56:09.669184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.206 [2024-11-19 10:56:11.946860] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:37.206 [2024-11-19 10:56:11.946899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.206 [2024-11-19 10:56:11.946907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.206 [2024-11-19 10:56:11.946915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.206 [2024-11-19 10:56:11.946921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.206 [2024-11-19 10:56:11.946927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.206 [2024-11-19 10:56:11.946932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.206 [2024-11-19 10:56:11.946938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.206 [2024-11-19 10:56:11.946943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.206 [2024-11-19 10:56:11.946948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:28:37.206 [2024-11-19 10:56:11.946967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:28:37.206 [2024-11-19 10:56:11.946978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c40d70 (9): Bad file descriptor 00:28:37.206 [2024-11-19 10:56:12.040315] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:28:37.206 Running I/O for 1 seconds... 00:28:37.206 12534.00 IOPS, 48.96 MiB/s 00:28:37.206 Latency(us) 00:28:37.206 [2024-11-19T09:56:16.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.206 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.206 Verification LBA range: start 0x0 length 0x4000 00:28:37.206 NVMe0n1 : 1.01 12629.17 49.33 0.00 0.00 10086.50 1897.81 14854.83 00:28:37.206 [2024-11-19T09:56:16.401Z] =================================================================================================================== 00:28:37.206 [2024-11-19T09:56:16.401Z] Total : 12629.17 49.33 0.00 0.00 10086.50 1897.81 14854.83 00:28:37.206 10:56:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:37.206 10:56:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:28:37.466 10:56:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:37.726 10:56:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:37.726 10:56:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:28:37.726 10:56:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:37.986 10:56:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1139899 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1139899 ']' 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1139899 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1139899 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1139899' 00:28:41.284 killing process with pid 1139899 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1139899 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1139899 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:41.284 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.544 rmmod nvme_tcp 00:28:41.544 rmmod nvme_fabrics 00:28:41.544 rmmod nvme_keyring 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1136235 ']' 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1136235 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1136235 ']' 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1136235 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136235 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1136235' 00:28:41.544 killing process with pid 1136235 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1136235 00:28:41.544 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1136235 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.804 10:56:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.712 10:56:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.974 00:28:43.974 real 0m40.390s 00:28:43.974 user 2m3.859s 00:28:43.974 sys 0m8.878s 00:28:43.974 10:56:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:43.974 10:56:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:43.974 ************************************ 00:28:43.974 END TEST nvmf_failover 00:28:43.974 ************************************ 00:28:43.974 10:56:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:43.974 10:56:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:43.974 10:56:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.974 10:56:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.974 ************************************ 00:28:43.974 START TEST nvmf_host_discovery 00:28:43.974 ************************************ 00:28:43.974 10:56:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:43.974 * Looking for test storage... 
00:28:43.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:43.974 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:43.974 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:28:43.974 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:44.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.236 --rc genhtml_branch_coverage=1 00:28:44.236 --rc genhtml_function_coverage=1 00:28:44.236 --rc genhtml_legend=1 00:28:44.236 --rc geninfo_all_blocks=1 00:28:44.236 --rc geninfo_unexecuted_blocks=1 00:28:44.236 00:28:44.236 ' 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:44.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.236 --rc genhtml_branch_coverage=1 00:28:44.236 --rc genhtml_function_coverage=1 00:28:44.236 --rc genhtml_legend=1 00:28:44.236 --rc geninfo_all_blocks=1 00:28:44.236 --rc geninfo_unexecuted_blocks=1 00:28:44.236 00:28:44.236 ' 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:44.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.236 --rc genhtml_branch_coverage=1 00:28:44.236 --rc genhtml_function_coverage=1 00:28:44.236 --rc genhtml_legend=1 00:28:44.236 --rc geninfo_all_blocks=1 00:28:44.236 --rc geninfo_unexecuted_blocks=1 00:28:44.236 00:28:44.236 ' 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:44.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.236 --rc genhtml_branch_coverage=1 00:28:44.236 --rc genhtml_function_coverage=1 00:28:44.236 --rc genhtml_legend=1 00:28:44.236 --rc geninfo_all_blocks=1 00:28:44.236 --rc geninfo_unexecuted_blocks=1 00:28:44.236 00:28:44.236 ' 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:44.236 10:56:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.236 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:44.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.237 10:56:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:28:52.374 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:52.375 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:52.375 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.375 10:56:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:52.375 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:52.375 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:52.375 
10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:52.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:52.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms
00:28:52.375
00:28:52.375 --- 10.0.0.2 ping statistics ---
00:28:52.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:52.375 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:52.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:52.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms
00:28:52.375
00:28:52.375 --- 10.0.0.1 ping statistics ---
00:28:52.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:52.375 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:52.375 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1146241
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1146241
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1146241 ']'
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:52.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:52.376 [2024-11-19 10:56:30.496296] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
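Condensed, the nvmf_tcp_init sequence traced above turns one machine into a two-endpoint testbed: the target-facing port moves into a private network namespace and each side gets one address on a shared /24, after which both directions are ping-checked. A replay of exactly those commands (interface names and addresses are the values this run chose; the trace's ipts wrapper is plain iptables plus a bookkeeping comment):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check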
00:28:52.376 [2024-11-19 10:56:30.496344] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.376 [2024-11-19 10:56:30.563381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.376 [2024-11-19 10:56:30.592230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.376 [2024-11-19 10:56:30.592261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.376 [2024-11-19 10:56:30.592267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.376 [2024-11-19 10:56:30.592275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.376 [2024-11-19 10:56:30.592279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.376 [2024-11-19 10:56:30.592716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.376 [2024-11-19 10:56:30.726723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.376 [2024-11-19 10:56:30.738898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.376 null0 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.376 null1 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1146387 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1146387 /tmp/host.sock 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1146387 ']' 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:52.376 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.376 10:56:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.376 [2024-11-19 10:56:30.843233] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
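Stripped of the xtrace noise, the bring-up in this stretch is two SPDK apps plus a handful of RPCs. A hedged replay in scripts/rpc.py form (rpc_cmd in the trace is the autotest wrapper around rpc.py; relative paths and the backgrounding are sketch conveniences, flags and arguments are verbatim from the trace, and the discovery call is the one traced just below):

    # Target: nvmf_tgt inside the namespace, RPC on the default /var/tmp/spdk.sock.
    # (The harness waits for the RPC socket before issuing any RPCs.)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                       # discovery service on 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512     # sizes exactly as in the trace
    ./scripts/rpc.py bdev_null_create null1 1000 512
    ./scripts/rpc.py bdev_wait_for_examine

    # Host: a second nvmf_tgt acting as the discovery client, RPC on /tmp/host.sock
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    ./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test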
00:28:52.376 [2024-11-19 10:56:30.843282] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146387 ] 00:28:52.376 [2024-11-19 10:56:30.930001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.376 [2024-11-19 10:56:30.965927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:52.637 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:52.638 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:52.898 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.899 [2024-11-19 10:56:31.986010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.899 10:56:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:52.899 10:56:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:52.899 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.159 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:28:53.160 10:56:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:53.731 [2024-11-19 10:56:32.665084] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:53.731 [2024-11-19 10:56:32.665104] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:53.731 [2024-11-19 10:56:32.665117] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:53.731 
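Collapsed into rpc.py form, the target-side sequence traced through this stretch (create a data subsystem, attach null0, expose it on 4420, allow the test host NQN) is the following; every name and flag is verbatim from the trace:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

The bdev_nvme INFO lines that follow show the discovery client reacting: it sees the new subsystem in the discovery log page, creates a controller to 10.0.0.2:4420, and attaches it as nvme0.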
[2024-11-19 10:56:32.752384] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:53.992 [2024-11-19 10:56:32.934558] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:53.992 [2024-11-19 10:56:32.935633] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14f7780:1 started. 00:28:53.992 [2024-11-19 10:56:32.937242] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:53.992 [2024-11-19 10:56:32.937261] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:53.992 [2024-11-19 10:56:32.944366] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14f7780 was disconnected and freed. delete nvme_qpair. 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.253 10:56:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:54.253 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:54.254 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:54.254 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:54.254 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.254 [2024-11-19 10:56:33.429139] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14f7b20:1 started. 00:28:54.254 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.254 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:54.254 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.254 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:54.254 [2024-11-19 10:56:33.435426] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14f7b20 was disconnected and freed. delete nvme_qpair. 
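All of the (( max-- )), eval, jq and xargs lines that dominate this trace come from a small set of polling helpers in host/discovery.sh and common/autotest_common.sh. A reconstruction from the traced line numbers; the rpc_cmd stand-in and the failure branch of waitforcondition are not visible in this log and are assumptions:

    rpc_cmd() { ./scripts/rpc.py "$@"; }   # stand-in for the autotest wrapper
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # assumption: matches notify_id going 0 -> 1 -> 2 above
    }
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1   # assumption: every wait in this run succeeds before this branch
    }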
00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.515 [2024-11-19 10:56:33.521924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:54.515 [2024-11-19 10:56:33.522268] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:54.515 [2024-11-19 10:56:33.522288] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:54.515 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:54.516 10:56:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:54.516 [2024-11-19 10:56:33.649690] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:54.516 10:56:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:54.516 [2024-11-19 10:56:33.709434] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:28:54.516 [2024-11-19 10:56:33.709471] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:54.516 [2024-11-19 10:56:33.709480] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:54.516 [2024-11-19 10:56:33.709485] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:55.904 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.905 [2024-11-19 10:56:34.797575] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:55.905 [2024-11-19 10:56:34.797598] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:55.905 [2024-11-19 10:56:34.799122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:55.905 [2024-11-19 10:56:34.799140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.905 [2024-11-19 10:56:34.799149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:55.905 [2024-11-19 10:56:34.799157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.905 [2024-11-19 10:56:34.799225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:55.905 [2024-11-19 10:56:34.799232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.905 [2024-11-19 10:56:34.799241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:55.905 [2024-11-19 10:56:34.799249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.905 [2024-11-19 10:56:34.799256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7e10 is same with the state(6) to be set 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.905 [2024-11-19 10:56:34.809136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c7e10 (9): Bad file descriptor 00:28:55.905 [2024-11-19 10:56:34.819178] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:55.905 [2024-11-19 10:56:34.819195] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:55.905 [2024-11-19 10:56:34.819201] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:55.905 [2024-11-19 10:56:34.819206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:55.905 [2024-11-19 10:56:34.819224] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:55.905 [2024-11-19 10:56:34.819658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.905 [2024-11-19 10:56:34.819697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c7e10 with addr=10.0.0.2, port=4420 00:28:55.905 [2024-11-19 10:56:34.819708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7e10 is same with the state(6) to be set 00:28:55.905 [2024-11-19 10:56:34.819727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c7e10 (9): Bad file descriptor 00:28:55.905 [2024-11-19 10:56:34.819753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:55.905 [2024-11-19 10:56:34.819761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:55.905 [2024-11-19 10:56:34.819770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:55.905 [2024-11-19 10:56:34.819778] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:55.905 [2024-11-19 10:56:34.819784] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:55.905 [2024-11-19 10:56:34.819789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
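The ERROR lines here are expected rather than a test failure: host/discovery.sh@127, traced at the top of this stretch, removed the 4420 listener, so each reconnect poll to 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED, and the stale socket then reads back as "Bad file descriptor", errno 9). The near-identical blocks that follow are successive reconnect polls roughly 10 ms apart. One way to pull that cadence out of a saved copy of this log (the file name is hypothetical; bdev_nvme.c:2517 is the "Start reconnecting ctrlr" line):

    grep -o '\[2024-11-19 10:56:[0-9.]*\] bdev_nvme.c:2517' nvmf_host_discovery.log   # one hit per reconnect attempt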
00:28:55.905 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.905 [2024-11-19 10:56:34.829256] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:55.905 [2024-11-19 10:56:34.829271] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:55.905 [2024-11-19 10:56:34.829277] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:55.905 [2024-11-19 10:56:34.829281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:55.905 [2024-11-19 10:56:34.829298] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:55.905 [2024-11-19 10:56:34.829605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.905 [2024-11-19 10:56:34.829618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c7e10 with addr=10.0.0.2, port=4420 00:28:55.905 [2024-11-19 10:56:34.829625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7e10 is same with the state(6) to be set 00:28:55.905 [2024-11-19 10:56:34.829637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c7e10 (9): Bad file descriptor 00:28:55.905 [2024-11-19 10:56:34.829647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:55.905 [2024-11-19 10:56:34.829654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:55.905 [2024-11-19 10:56:34.829661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:55.905 [2024-11-19 10:56:34.829667] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:55.905 [2024-11-19 10:56:34.829672] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:55.905 [2024-11-19 10:56:34.829676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:55.905 [2024-11-19 10:56:34.839329] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:55.905 [2024-11-19 10:56:34.839341] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:55.905 [2024-11-19 10:56:34.839345] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:55.905 [2024-11-19 10:56:34.839350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:55.905 [2024-11-19 10:56:34.839364] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:55.905 [2024-11-19 10:56:34.839644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.905 [2024-11-19 10:56:34.839656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c7e10 with addr=10.0.0.2, port=4420 00:28:55.905 [2024-11-19 10:56:34.839663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7e10 is same with the state(6) to be set 00:28:55.905 [2024-11-19 10:56:34.839674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c7e10 (9): Bad file descriptor 00:28:55.905 [2024-11-19 10:56:34.839684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:55.905 [2024-11-19 10:56:34.839691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:55.905 [2024-11-19 10:56:34.839698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:55.905 [2024-11-19 10:56:34.839704] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:55.905 [2024-11-19 10:56:34.839709] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:55.905 [2024-11-19 10:56:34.839713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:55.905 [2024-11-19 10:56:34.849395] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:55.905 [2024-11-19 10:56:34.849410] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:55.905 [2024-11-19 10:56:34.849415] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:55.905 [2024-11-19 10:56:34.849420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:55.905 [2024-11-19 10:56:34.849435] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:55.905 [2024-11-19 10:56:34.849706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.905 [2024-11-19 10:56:34.849719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c7e10 with addr=10.0.0.2, port=4420 00:28:55.906 [2024-11-19 10:56:34.849726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7e10 is same with the state(6) to be set 00:28:55.906 [2024-11-19 10:56:34.849738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c7e10 (9): Bad file descriptor 00:28:55.906 [2024-11-19 10:56:34.849748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:55.906 [2024-11-19 10:56:34.849755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:55.906 [2024-11-19 10:56:34.849762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:55.906 [2024-11-19 10:56:34.849768] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:28:55.906 [2024-11-19 10:56:34.849773] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:55.906 [2024-11-19 10:56:34.849784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:55.906 [2024-11-19 10:56:34.859467] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:55.906 [2024-11-19 10:56:34.859480] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:55.906 [2024-11-19 10:56:34.859485] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:55.906 [2024-11-19 10:56:34.859489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:55.906 [2024-11-19 10:56:34.859503] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:55.906 [2024-11-19 10:56:34.859692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.906 [2024-11-19 10:56:34.859703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c7e10 with addr=10.0.0.2, port=4420 00:28:55.906 [2024-11-19 10:56:34.859710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7e10 is same with the state(6) to be set 00:28:55.906 [2024-11-19 10:56:34.859721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c7e10 (9): Bad file descriptor 00:28:55.906 [2024-11-19 10:56:34.859732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:55.906 [2024-11-19 10:56:34.859739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:55.906 [2024-11-19 10:56:34.859746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:55.906 [2024-11-19 10:56:34.859752] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:55.906 [2024-11-19 10:56:34.859757] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:55.906 [2024-11-19 10:56:34.859761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
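[Annotation] Every check in this test goes through rpc_cmd, which issues a JSON-RPC call to the host application over the Unix socket named with -s (/tmp/host.sock in this run). The real helper in autotest_common.sh may keep a persistent RPC session; a functionally equivalent stand-in, assuming SPDK's scripts/rpc.py client and a $rootdir pointing at the SPDK tree, would be:

    # Forward a method name and its arguments to SPDK's JSON-RPC client.
    rpc_cmd() {
        "$rootdir/scripts/rpc.py" "$@"
    }

    # Usage matching the trace above:
    #   rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers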
00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:55.906 [2024-11-19 10:56:34.869535] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:55.906 [2024-11-19 10:56:34.869549] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:55.906 [2024-11-19 10:56:34.869557] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:55.906 [2024-11-19 10:56:34.869562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:55.906 [2024-11-19 10:56:34.869577] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:55.906 [2024-11-19 10:56:34.869846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.906 [2024-11-19 10:56:34.869859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c7e10 with addr=10.0.0.2, port=4420 00:28:55.906 [2024-11-19 10:56:34.869867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7e10 is same with the state(6) to be set 00:28:55.906 [2024-11-19 10:56:34.869878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c7e10 (9): Bad file descriptor 00:28:55.906 [2024-11-19 10:56:34.869896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:55.906 [2024-11-19 10:56:34.869903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:55.906 [2024-11-19 10:56:34.869910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:55.906 [2024-11-19 10:56:34.869916] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:55.906 [2024-11-19 10:56:34.869921] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:55.906 [2024-11-19 10:56:34.869925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:55.906 [2024-11-19 10:56:34.879609] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:55.906 [2024-11-19 10:56:34.879621] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:55.906 [2024-11-19 10:56:34.879625] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:28:55.906 [2024-11-19 10:56:34.879630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:55.906 [2024-11-19 10:56:34.879644] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:55.906 [2024-11-19 10:56:34.879916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.906 [2024-11-19 10:56:34.879928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c7e10 with addr=10.0.0.2, port=4420 00:28:55.906 [2024-11-19 10:56:34.879935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7e10 is same with the state(6) to be set 00:28:55.906 [2024-11-19 10:56:34.879946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c7e10 (9): Bad file descriptor 00:28:55.906 [2024-11-19 10:56:34.879962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:55.906 [2024-11-19 10:56:34.879969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:55.906 [2024-11-19 10:56:34.879976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:55.906 [2024-11-19 10:56:34.879982] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:55.906 [2024-11-19 10:56:34.879987] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:55.906 [2024-11-19 10:56:34.879992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
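[Annotation] The discovery.sh@59 and discovery.sh@55 traces interleaved above show exactly how the two polled values are built: each is an RPC result reduced to a sorted, space-joined name list via jq, sort, and xargs. Reconstructed from those traces (with the socket path hardcoded to match this run):

    # Names of the NVMe controllers the host has attached, e.g. "nvme0".
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    # Names of the block devices those controllers expose,
    # e.g. "nvme0n1 nvme0n2".
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }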
00:28:55.906 [2024-11-19 10:56:34.884980] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:55.906 [2024-11-19 10:56:34.885001] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:55.906 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.907 10:56:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.907 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.169 10:56:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.113 [2024-11-19 10:56:36.246121] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:57.113 [2024-11-19 10:56:36.246135] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:57.113 [2024-11-19 10:56:36.246144] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:57.373 [2024-11-19 10:56:36.375519] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:57.634 [2024-11-19 10:56:36.680926] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:28:57.634 [2024-11-19 10:56:36.681582] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x14d9050:1 started. 
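[Annotation] The discovery.sh@141 call above restarts the discovery service and, because of -w (wait_for_attach), blocks until the subsystems the discovery controller reports have finished attaching; the surrounding log lines show the 10.0.0.2:4421 path being found and a new qpair (0x14d9050) connecting. Issued directly against the RPC socket, the same call would look like:

    # Register a discovery service named "nvme" and wait for the
    # subsystems it reports to finish attaching before returning.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w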
00:28:57.634 [2024-11-19 10:56:36.682890] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:57.634 [2024-11-19 10:56:36.682911] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.634 [2024-11-19 10:56:36.693624] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x14d9050 was disconnected and freed. delete nvme_qpair. 
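[Annotation] Registering a second discovery service under the already-used name "nvme" is expected to fail, so the call above is wrapped in the harness's NOT helper, which inverts the exit status; the request/response dump that follows is the expected -17 "File exists" rejection. A minimal sketch of such an inverting wrapper (the real helper also validates that its first argument is executable and bookkeeps the exit status, as the valid_exec_arg and es lines in the trace show):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # command failed, which is what the test expects
    }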
00:28:57.634 request: 00:28:57.634 { 00:28:57.634 "name": "nvme", 00:28:57.634 "trtype": "tcp", 00:28:57.634 "traddr": "10.0.0.2", 00:28:57.634 "adrfam": "ipv4", 00:28:57.634 "trsvcid": "8009", 00:28:57.634 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:57.634 "wait_for_attach": true, 00:28:57.634 "method": "bdev_nvme_start_discovery", 00:28:57.634 "req_id": 1 00:28:57.634 } 00:28:57.634 Got JSON-RPC error response 00:28:57.634 response: 00:28:57.634 { 00:28:57.634 "code": -17, 00:28:57.634 "message": "File exists" 00:28:57.634 } 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:57.634 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.635 request: 00:28:57.635 { 00:28:57.635 "name": "nvme_second", 00:28:57.635 "trtype": "tcp", 00:28:57.635 "traddr": "10.0.0.2", 00:28:57.635 "adrfam": "ipv4", 00:28:57.635 "trsvcid": "8009", 00:28:57.635 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:57.635 "wait_for_attach": true, 00:28:57.635 "method": "bdev_nvme_start_discovery", 00:28:57.635 "req_id": 1 00:28:57.635 } 00:28:57.635 Got JSON-RPC error response 00:28:57.635 response: 00:28:57.635 { 00:28:57.635 "code": -17, 00:28:57.635 "message": "File exists" 00:28:57.635 } 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:57.635 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:57.896 10:56:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.896 10:56:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.837 [2024-11-19 10:56:37.942331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.837 [2024-11-19 10:56:37.942353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x162e5d0 with addr=10.0.0.2, port=8010 00:28:58.837 [2024-11-19 10:56:37.942363] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:58.837 [2024-11-19 10:56:37.942368] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:58.837 [2024-11-19 10:56:37.942373] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:59.779 [2024-11-19 10:56:38.944670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.779 [2024-11-19 10:56:38.944688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x162e5d0 with addr=10.0.0.2, port=8010 00:28:59.779 [2024-11-19 10:56:38.944697] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:59.779 [2024-11-19 10:56:38.944702] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:59.779 [2024-11-19 10:56:38.944707] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:01.160 [2024-11-19 10:56:39.946672] 
bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:01.160 request: 00:29:01.160 { 00:29:01.160 "name": "nvme_second", 00:29:01.160 "trtype": "tcp", 00:29:01.160 "traddr": "10.0.0.2", 00:29:01.160 "adrfam": "ipv4", 00:29:01.160 "trsvcid": "8010", 00:29:01.160 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:01.160 "wait_for_attach": false, 00:29:01.160 "attach_timeout_ms": 3000, 00:29:01.160 "method": "bdev_nvme_start_discovery", 00:29:01.160 "req_id": 1 00:29:01.160 } 00:29:01.160 Got JSON-RPC error response 00:29:01.160 response: 00:29:01.160 { 00:29:01.160 "code": -110, 00:29:01.160 "message": "Connection timed out" 00:29:01.160 } 00:29:01.160 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:01.160 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:01.160 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:01.160 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:01.160 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:01.161 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:01.161 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:01.161 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:01.161 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.161 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:01.161 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:01.161 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:01.161 10:56:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1146387 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.161 rmmod nvme_tcp 00:29:01.161 rmmod nvme_fabrics 00:29:01.161 rmmod nvme_keyring 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:29:01.161 10:56:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1146241 ']' 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1146241 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1146241 ']' 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1146241 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146241 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146241' 00:29:01.161 killing process with pid 1146241 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1146241 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1146241 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.161 10:56:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:03.707 00:29:03.707 real 0m19.336s 00:29:03.707 user 0m22.784s 00:29:03.707 sys 0m6.791s 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:03.707 ************************************ 00:29:03.707 END TEST nvmf_host_discovery 00:29:03.707 ************************************ 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.707 ************************************ 00:29:03.707 START TEST nvmf_host_multipath_status 00:29:03.707 ************************************ 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:03.707 * Looking for test storage... 00:29:03.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:03.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.707 --rc genhtml_branch_coverage=1 00:29:03.707 --rc genhtml_function_coverage=1 00:29:03.707 --rc genhtml_legend=1 00:29:03.707 --rc geninfo_all_blocks=1 00:29:03.707 --rc geninfo_unexecuted_blocks=1 00:29:03.707 00:29:03.707 ' 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:03.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.707 --rc genhtml_branch_coverage=1 00:29:03.707 --rc genhtml_function_coverage=1 00:29:03.707 --rc genhtml_legend=1 00:29:03.707 --rc geninfo_all_blocks=1 00:29:03.707 --rc geninfo_unexecuted_blocks=1 00:29:03.707 00:29:03.707 ' 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:03.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.707 --rc genhtml_branch_coverage=1 00:29:03.707 --rc genhtml_function_coverage=1 00:29:03.707 --rc genhtml_legend=1 00:29:03.707 --rc geninfo_all_blocks=1 00:29:03.707 --rc geninfo_unexecuted_blocks=1 00:29:03.707 00:29:03.707 ' 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:03.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.707 --rc genhtml_branch_coverage=1 00:29:03.707 --rc genhtml_function_coverage=1 00:29:03.707 --rc genhtml_legend=1 00:29:03.707 --rc geninfo_all_blocks=1 00:29:03.707 --rc geninfo_unexecuted_blocks=1 00:29:03.707 00:29:03.707 ' 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.707 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:03.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:03.708 10:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.850 10:56:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.850 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:11.851 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
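For orientation, the gather_supported_nvmf_pci_devs trace above is matching NICs by vendor:device ID (e810 = 0x1592/0x159b, x722 = 0x37d2, plus the Mellanox 0x15b3 list) and checking which kernel driver each is bound to. A rough sysfs equivalent of that matching looks like the sketch below; this is illustrative only, not the actual common.sh implementation, which consults a prebuilt pci_bus_cache:

  # Illustrative sketch: enumerate PCI NICs by vendor:device and report the
  # bound driver, mirroring the "Found 0000:4b:00.0 (0x8086 - 0x159b)" lines.
  for dev in /sys/bus/pci/devices/*; do
    id="$(cat "$dev/vendor"):$(cat "$dev/device")"
    case "$id" in
      0x8086:0x1592|0x8086:0x159b|0x8086:0x37d2|0x15b3:*)
        drv=unknown
        [ -e "$dev/driver" ] && drv=$(basename "$(readlink "$dev/driver")")
        echo "Found ${dev##*/} ($id), bound to: $drv"   # e.g. ice for 0x159b
        ;;
    esac
  done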
00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:11.851 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:11.851 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:29:11.851 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.851 10:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.851 10:56:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:29:11.851 00:29:11.851 --- 10.0.0.2 ping statistics --- 00:29:11.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.851 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:11.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:29:11.851 00:29:11.851 --- 10.0.0.1 ping statistics --- 00:29:11.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.851 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1152491 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1152491 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1152491 ']' 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.851 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.852 10:56:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.852 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.852 10:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:11.852 [2024-11-19 10:56:50.187252] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:29:11.852 [2024-11-19 10:56:50.187317] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.852 [2024-11-19 10:56:50.289474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:11.852 [2024-11-19 10:56:50.342386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.852 [2024-11-19 10:56:50.342446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.852 [2024-11-19 10:56:50.342455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.852 [2024-11-19 10:56:50.342462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.852 [2024-11-19 10:56:50.342468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.852 [2024-11-19 10:56:50.344242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.852 [2024-11-19 10:56:50.344270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.852 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.852 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:29:11.852 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:11.852 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.852 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:12.113 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.113 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1152491 00:29:12.113 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:12.113 [2024-11-19 10:56:51.227166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.113 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:12.376 Malloc0 00:29:12.376 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:29:12.637 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:12.899 10:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.899 [2024-11-19 10:56:52.059473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.899 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:13.160 [2024-11-19 10:56:52.248014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:13.161 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1152927 00:29:13.161 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:13.161 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:13.161 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1152927 /var/tmp/bdevperf.sock 00:29:13.161 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1152927 ']' 00:29:13.161 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:13.161 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.161 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:13.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
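To recap the setup just traced: nvmf_tcp_init moved the target interface cvl_0_0 into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, left cvl_0_1 on the host side with 10.0.0.1/24, and verified both directions with ping; nvmf_tgt was then started inside the namespace and configured over /var/tmp/spdk.sock. The target-side RPC sequence at @36-@42 condenses to the following replay (here $rpc is shorthand for the full scripts/rpc.py path used above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0            # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -r -m 2                 # -r enables ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on the same subsystem give bdevperf two TCP paths (ports 4420 and 4421) to one namespace, which is what the multipath status checks below exercise.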
00:29:13.161 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.161 10:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:14.105 10:56:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.105 10:56:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:29:14.105 10:56:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:14.367 10:56:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:14.629 Nvme0n1 00:29:14.629 10:56:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:15.203 Nvme0n1 00:29:15.203 10:56:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:29:15.203 10:56:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:17.118 10:56:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:29:17.118 10:56:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:17.379 10:56:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:17.641 10:56:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:29:18.585 10:56:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:29:18.585 10:56:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:18.585 10:56:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:18.585 10:56:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:18.847 10:56:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:18.847 10:56:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:18.847 10:56:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:18.847 10:56:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:18.847 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:18.847 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:18.847 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:18.847 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:19.108 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:19.108 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:19.108 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:19.108 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:19.369 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:19.369 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:19.369 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:19.369 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:19.369 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:19.369 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:19.369 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:19.369 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:19.629 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:19.629 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:29:19.629 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
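The @59/@60 pair being traced here is the whole of the test's set_ANA_state helper: one nvmf_subsystem_listener_set_ana_state call per listener port. In sketch form (function wrapper reconstructed from the trace; $rpc as before):

  set_ANA_state() {   # usage: set_ANA_state <state for 4420> <state for 4421>
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  set_ANA_state non_optimized optimized   # the invocation in flight at @94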
00:29:19.890 10:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:19.890 10:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:21.282 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:21.546 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:21.546 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:21.546 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:21.546 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:21.807 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:21.807 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:21.807 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
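Each check_status line in this trace expands into six port_status probes against the bdevperf RPC socket: for each port, the current, connected, and accessible flags of its io_path (roughly: the path I/O is routed on, the transport connection state, and ANA accessibility). A port_status probe boils down to the following sketch (wrapper form reconstructed; the jq filter is the one traced at @64):

  port_status() {   # usage: port_status <port> <field> <expected>
    local got
    got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$got" == "$3" ]]
  }
  port_status 4420 current false   # the probe in flight here (@68 under @96)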
00:29:21.807 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:21.807 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:21.807 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:21.807 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:21.807 10:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:22.068 10:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:22.068 10:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:29:22.068 10:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:22.330 10:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:22.330 10:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.713 10:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:23.974 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:23.974 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:23.974 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.974 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:24.234 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:24.234 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:24.234 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:24.234 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:24.234 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:24.234 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:24.234 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:24.235 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:24.495 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:24.495 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:29:24.495 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:24.755 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:24.755 10:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:29:26.140 10:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:29:26.140 10:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:26.140 10:57:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:26.140 10:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:26.140 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:26.140 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:26.140 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:26.140 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:26.140 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:26.140 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:26.140 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:26.140 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:26.400 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:26.400 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:26.401 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:26.401 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:26.692 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:26.692 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:26.692 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:26.692 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:26.995 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:26.995 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:26.995 10:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:26.995 10:57:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:26.995 10:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:26.995 10:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:29:26.995 10:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:27.296 10:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:27.296 10:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:29:28.271 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:29:28.271 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:28.271 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.271 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:28.531 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:28.531 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:28.531 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:28.531 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.792 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:28.792 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:28.792 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.792 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:28.792 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:28.792 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:28.792 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.792 10:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:29.052 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:29.052 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:29.052 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:29.052 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:29.313 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:29.313 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:29.313 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:29.313 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:29.574 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:29.574 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:29:29.574 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:29.574 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:29.835 10:57:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:29:30.778 10:57:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:29:30.778 10:57:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:30.778 10:57:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:30.778 10:57:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:31.039 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:31.039 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:31.039 10:57:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:31.039 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:31.300 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:31.300 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:31.300 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:31.300 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:31.300 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:31.300 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:31.300 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:31.300 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:31.561 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:31.561 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:31.561 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:31.561 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:31.822 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:31.822 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:31.822 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:31.822 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:31.822 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:31.822 10:57:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:29:32.083 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:29:32.083 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:32.343 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:32.343 10:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.727 10:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:33.987 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:33.988 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:33.988 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.988 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:34.254 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:34.254 10:57:13 
00:29:34.254 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:34.254 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:34.254 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:34.254 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:34.254 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:34.254 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:34.254 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:34.515 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:34.515 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:29:34.515 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:34.775 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:29:34.775 10:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:29:36.160 10:57:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:29:36.160 10:57:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:29:36.160 10:57:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.160 10:57:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:36.160 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:36.160 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:29:36.160 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.160 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:36.160 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:36.160 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:36.160 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.160 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:36.421 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:36.421 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:36.421 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.421 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:36.682 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:36.682 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:36.682 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.682 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:36.943 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:36.944 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:36.944 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.944 10:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:36.944 10:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:36.944 10:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:29:36.944 10:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:37.204 10:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:29:37.465 10:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
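Note: each check_status call in this trace takes six booleans, consumed in the order the @68 through @73 calls appear: the expected current, connected, and accessible values for port 4420 and then 4421. A sketch of that wrapper (assumed, but consistent with the traced call order):

  check_status() {
      # $1/$2 = expected "current" for 4420/4421, $3/$4 = "connected", $5/$6 = "accessible"
      port_status 4420 current "$1"
      port_status 4421 current "$2"
      port_status 4420 connected "$3"
      port_status 4421 connected "$4"
      port_status 4420 accessible "$5"
      port_status 4421 accessible "$6"
  }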
00:29:38.407 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:29:38.407 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:38.407 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:38.407 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:38.667 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:38.667 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:29:38.668 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:38.668 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:38.668 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:38.668 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:38.668 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:38.668 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:38.928 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:38.928 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:38.928 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:38.928 10:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:39.189 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:39.189 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:39.189 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:39.189 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:39.189 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:39.189 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:39.189 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:39.189 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:39.450 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:39.450 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:29:39.450 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:39.711 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:29:39.711 10:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:29:41.095 10:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:29:41.095 10:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:41.095 10:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.095 10:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:41.095 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:41.095 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:29:41.095 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.095 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:41.095 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:41.095 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:41.095 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.095 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
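Note: when reading such traces by hand, the six single-field queries can be collapsed into one pass over the same RPC output; the fields used below are exactly those selected by the traced jq filters (a convenience one-liner, not part of the test):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'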
00:29:41.356 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:41.356 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:41.356 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.356 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:41.617 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:41.617 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:41.617 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.617 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:41.617 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:41.617 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:29:41.617 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:41.617 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.878 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:41.878 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1152927
00:29:41.878 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1152927 ']'
00:29:41.878 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1152927
00:29:41.878 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:29:41.878 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:41.878 10:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1152927
00:29:41.878 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:29:41.878 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:29:41.878 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1152927'
killing process with pid 1152927
00:29:41.878 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1152927
00:29:41.878 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1152927
00:29:41.878 {
00:29:41.878 "results": [
00:29:41.878 {
00:29:41.878 "job": "Nvme0n1",
00:29:41.878 "core_mask": "0x4",
00:29:41.878 "workload": "verify",
00:29:41.878 "status": "terminated",
00:29:41.878 "verify_range": {
00:29:41.878 "start": 0,
00:29:41.878 "length": 16384
00:29:41.878 },
00:29:41.878 "queue_depth": 128,
00:29:41.878 "io_size": 4096,
00:29:41.878 "runtime": 26.645049,
00:29:41.878 "iops": 11997.876228337955,
00:29:41.878 "mibps": 46.866704016945135,
00:29:41.878 "io_failed": 0,
00:29:41.878 "io_timeout": 0,
00:29:41.878 "avg_latency_us": 10649.593556303516,
00:29:41.878 "min_latency_us": 549.5466666666666,
00:29:41.878 "max_latency_us": 3075822.933333333
00:29:41.878 }
00:29:41.878 ],
00:29:41.878 "core_count": 1
00:29:41.878 }
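Note: the bdevperf summary above is internally consistent: with io_size 4096, the reported iops and runtime reproduce the reported throughput and imply the total I/O count (a quick sanity check on the figures, not part of the test):

  python3 -c 'iops = 11997.876228337955; print(iops * 4096 / 2**20); print(iops * 26.645049)'
  # prints 46.8667... (matches "mibps") and ~319684 (total I/Os over the 26.6 s run)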
00:29:42.160 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1152927
00:29:42.160 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-19 10:56:52.321787] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
[2024-11-19 10:56:52.321871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152927 ]
[2024-11-19 10:56:52.415021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-19 10:56:52.467593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
10758.00 IOPS, 42.02 MiB/s
[2024-11-19T09:57:21.355Z] 11056.00 IOPS, 43.19 MiB/s
[2024-11-19T09:57:21.355Z] 11201.00 IOPS, 43.75 MiB/s
[2024-11-19T09:57:21.355Z] 11620.75 IOPS, 45.39 MiB/s
[2024-11-19T09:57:21.355Z] 11900.60 IOPS, 46.49 MiB/s
[2024-11-19T09:57:21.355Z] 12116.00 IOPS, 47.33 MiB/s
[2024-11-19T09:57:21.355Z] 12239.29 IOPS, 47.81 MiB/s
[2024-11-19T09:57:21.355Z] 12359.75 IOPS, 48.28 MiB/s
[2024-11-19T09:57:21.355Z] 12412.78 IOPS, 48.49 MiB/s
[2024-11-19T09:57:21.355Z] 12443.30 IOPS, 48.61 MiB/s
[2024-11-19T09:57:21.355Z] 12487.91 IOPS, 48.78 MiB/s
[2024-11-19T09:57:21.355Z] [2024-11-19 10:57:06.195640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-19 10:57:06.195675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
[2024-11-19 10:57:06.195693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-19 10:57:06.195699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
[2024-11-19 10:57:06.195710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-19 10:57:06.195715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
[2024-11-19 10:57:06.195726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-19 10:57:06.195731] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.195742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.195747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.195757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.195762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.195773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.195778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.195788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.195793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.195916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.195924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.195936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.195947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.195957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.195962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.195973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.195978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.195988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.195993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 
[2024-11-19 10:57:06.196009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4976 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:42.161 [2024-11-19 10:57:06.196450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.161 [2024-11-19 10:57:06.196455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.196796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.196801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.197241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.197257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.197272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.197288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.197304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:42.162 
[2024-11-19 10:57:06.197314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.162 [2024-11-19 10:57:06.197319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.162 [2024-11-19 10:57:06.197493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:42.162 [2024-11-19 10:57:06.197619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.163 [2024-11-19 10:57:06.197691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197899] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.197931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.197941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.163 [2024-11-19 10:57:06.197946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.198226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.163 [2024-11-19 10:57:06.198234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.198245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.163 [2024-11-19 10:57:06.198251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.198261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.198267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.198277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.198282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.198292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.198298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.198308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.163 [2024-11-19 10:57:06.198314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.163 [2024-11-19 10:57:06.198324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:42.163 [2024-11-19 10:57:06.198329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... repeated *NOTICE* command/completion pairs, 00:29:42.163-00:29:42.170 (2024-11-19 10:57:06.198-10:57:06.215): nvme_qpair.c: 243:nvme_io_qpair_print_command logs READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands, sqid:1 nsid:1 len:8, lba 4240-5256; each completes via nvme_qpair.c: 474:spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0022-007f and wrapping to 0070 ...]
00:29:42.170 [2024-11-19 10:57:06.215129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.170
[2024-11-19 10:57:06.215136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.170 [2024-11-19 10:57:06.215163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.170 [2024-11-19 10:57:06.215184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.170 [2024-11-19 10:57:06.215205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.170 [2024-11-19 10:57:06.215225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.170 [2024-11-19 10:57:06.215247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.170 [2024-11-19 10:57:06.215268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.170 [2024-11-19 10:57:06.215288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.170 [2024-11-19 10:57:06.215309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.170 [2024-11-19 10:57:06.215329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5216 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:42.170 [2024-11-19 10:57:06.215350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.215370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.215391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.215412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.215433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.215454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.215474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.215497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.215518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.215539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.215553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:73 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.215560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:42.170 [2024-11-19 10:57:06.223201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.170 [2024-11-19 10:57:06.223226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.171 [2024-11-19 10:57:06.223333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:29:42.171 [2024-11-19 10:57:06.223620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.223641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.223648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.224320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.171 [2024-11-19 10:57:06.224335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.224351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.171 [2024-11-19 10:57:06.224358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.224372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.171 [2024-11-19 10:57:06.224379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.224393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.224400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.224414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.224420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.224434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.224441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.224455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.224462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.224475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.224482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.224496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.224502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:42.171 [2024-11-19 10:57:06.224516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.171 [2024-11-19 10:57:06.224523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224691] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.172 [2024-11-19 10:57:06.224877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 
10:57:06.224898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.224980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.224993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.225000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.225014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.225020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.225034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.225041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.225054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.225061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.225075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.225083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.225097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.225104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.225117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.225124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.225138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.225145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.225167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.172 [2024-11-19 10:57:06.225176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:42.172 [2024-11-19 10:57:06.225195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.173 [2024-11-19 10:57:06.225204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.173 [2024-11-19 10:57:06.225233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.225785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.225795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.226570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.226605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.226635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.226663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:42.173 
[2024-11-19 10:57:06.226691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.226720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.226748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.226776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.226805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.226833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:42.173 [2024-11-19 10:57:06.226861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.173 [2024-11-19 10:57:06.226874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.226893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.226902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.226921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.226930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.226949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.226959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.226977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.226987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:42.174 [2024-11-19 10:57:06.227518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.174 [2024-11-19 10:57:06.227527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:42.174 [2024-11-19 10:57:06.227546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.174 [2024-11-19 10:57:06.227555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:42.175 [2024-11-19 10:57:06.227976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:42.175 [2024-11-19 10:57:06.227986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[... identical *NOTICE* command/completion pairs elided (2024-11-19 10:57:06.227546 through 10:57:06.235522): every outstanding READ (lba:4240-4784, SGL TRANSPORT DATA BLOCK) and WRITE (lba:4792-5256, SGL DATA BLOCK OFFSET) on qid:1 is printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02); after sqhd wraps, the same lba sequence repeats once more with a second set of cids ...]
00:29:42.180 [2024-11-19 10:57:06.235512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.180 [2024-11-19 10:57:06.235522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE
(03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:42.180 [2024-11-19 10:57:06.235540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.180 [2024-11-19 10:57:06.235552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.180 [2024-11-19 10:57:06.235570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.180 [2024-11-19 10:57:06.235579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.180 [2024-11-19 10:57:06.235598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.180 [2024-11-19 10:57:06.235607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:42.180 [2024-11-19 10:57:06.235626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.180 [2024-11-19 10:57:06.235635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:42.180 [2024-11-19 10:57:06.235654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.180 [2024-11-19 10:57:06.235663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:42.180 [2024-11-19 10:57:06.235682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.180 [2024-11-19 10:57:06.235691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:42.180 [2024-11-19 10:57:06.235710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.180 [2024-11-19 10:57:06.235719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:42.180 [2024-11-19 10:57:06.235733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 
[2024-11-19 10:57:06.235980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.235993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.235999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4984 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:42.181 [2024-11-19 10:57:06.236836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.181 [2024-11-19 10:57:06.236842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.236855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.236862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.236878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.236885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.236898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.236905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.236918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.236925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.236938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.236945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.236958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.236965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.236978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.236984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.236997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.182 [2024-11-19 10:57:06.237307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.182 
[2024-11-19 10:57:06.237320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.182 [2024-11-19 10:57:06.237327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.182 [2024-11-19 10:57:06.237347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.182 [2024-11-19 10:57:06.237368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.182 [2024-11-19 10:57:06.237389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.182 [2024-11-19 10:57:06.237409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.182 [2024-11-19 10:57:06.237429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.182 [2024-11-19 10:57:06.237442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.183 [2024-11-19 10:57:06.237629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237707] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.237860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.237868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.238459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.238480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.238500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.183 [2024-11-19 10:57:06.238520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.183 [2024-11-19 10:57:06.238540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.183 [2024-11-19 10:57:06.238560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.238580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.238600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.238620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.238641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.238660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:42.183 [2024-11-19 10:57:06.238683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.238703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.183 [2024-11-19 10:57:06.238723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:42.183 [2024-11-19 10:57:06.238737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.238980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.238987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.239001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.239008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.239021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.239028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.239041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.184 [2024-11-19 10:57:06.239048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.239061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.239067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.239081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.239088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.239101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.239109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.239123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.239129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.239143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.243083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.243125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.243135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.243154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.243169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.243183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.243189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.243203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.243210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.243223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.243230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.243243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.243250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:29:42.184 [2024-11-19 10:57:06.243263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.243270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.243283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.243290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.243303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.184 [2024-11-19 10:57:06.243310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:42.184 [2024-11-19 10:57:06.243323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.185 [2024-11-19 10:57:06.243330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.185 [2024-11-19 10:57:06.243350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243664] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.243698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.243705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244445] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.185 [2024-11-19 10:57:06.244542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.185 [2024-11-19 10:57:06.244549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:42.186 [2024-11-19 10:57:06.244651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5144 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.244986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.244995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.245014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.186 [2024-11-19 10:57:06.245034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.186 [2024-11-19 10:57:06.245054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.186 [2024-11-19 10:57:06.245074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.186 [2024-11-19 10:57:06.245094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.186 [2024-11-19 10:57:06.245114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.186 [2024-11-19 10:57:06.245134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.186 [2024-11-19 10:57:06.245154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.186 [2024-11-19 10:57:06.245180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.186 [2024-11-19 10:57:06.245200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.186 [2024-11-19 10:57:06.245220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:42.186 [2024-11-19 10:57:06.245233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.187 [2024-11-19 10:57:06.245361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:29:42.187 [2024-11-19 10:57:06.245455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.245576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.245582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.246155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.246171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.246185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.246192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.246206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.246213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.246227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.246234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.246247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.187 [2024-11-19 10:57:06.246253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.246267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.187 [2024-11-19 10:57:06.246274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.246287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.187 [2024-11-19 10:57:06.246293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.246307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.246316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.246329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.246336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:42.187 [2024-11-19 10:57:06.246349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.187 [2024-11-19 10:57:06.246356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:42.188 [2024-11-19 10:57:06.246620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.188 [2024-11-19 10:57:06.246780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.246983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.246996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.247003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.247016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.247023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.247036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.188 [2024-11-19 10:57:06.247043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.188 [2024-11-19 10:57:06.247057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.189 [2024-11-19 10:57:06.247064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.189 [2024-11-19 10:57:06.247084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.189 [2024-11-19 10:57:06.247103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:42.189 
[2024-11-19 10:57:06.247212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 
sqhd:0054 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.247981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.247994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.189 [2024-11-19 10:57:06.248214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:42.189 [2024-11-19 10:57:06.248226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 [2024-11-19 10:57:06.248457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:42.190 [2024-11-19 10:57:06.248469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.190 
[2024-11-19 10:57:06.248475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:42.190 [2024-11-19 10:57:06.248488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:42.190 [2024-11-19 10:57:06.248494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:42.190 [2024-11-19 10:57:06.248657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.190 [2024-11-19 10:57:06.248663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: every queued READ and WRITE on qid:1 (nsid:1, lba 4240-5256, len:8) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd incrementing, cids varying ...]
00:29:42.196 [2024-11-19 10:57:06.253788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.196 [2024-11-19 10:57:06.253794] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.253814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.253832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.253851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.253870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.253888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.253907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.253926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.253945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.253963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 
10:57:06.253982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.253994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.254001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.254014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.254020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.254032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.254039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.254052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.196 [2024-11-19 10:57:06.254059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.254071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.196 [2024-11-19 10:57:06.254077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.254090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.196 [2024-11-19 10:57:06.254096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.254108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.196 [2024-11-19 10:57:06.254115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.254127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.196 [2024-11-19 10:57:06.254133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.254146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.196 [2024-11-19 10:57:06.254152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.254167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4832 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:42.196 [2024-11-19 10:57:06.254174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:42.196 [2024-11-19 10:57:06.254186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.254192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.254205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.254211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.254223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.254229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.254242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.254248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.254260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.254266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.254280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.254286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.254298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.254305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.254317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.197 [2024-11-19 10:57:06.258800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:42.197 [2024-11-19 10:57:06.258815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.258821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.258836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.258842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.258857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.258864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:42.198 
[2024-11-19 10:57:06.258879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.258886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.258901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.258907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.258922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.258928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.258943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.258949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.258964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.258970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.258985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.258991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.198 [2024-11-19 10:57:06.259273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:42.198 [2024-11-19 10:57:06.259562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.198 [2024-11-19 10:57:06.259568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:42.199 [2024-11-19 10:57:06.259583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.199 [2024-11-19 10:57:06.259589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:42.199 [2024-11-19 10:57:06.259604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.199 [2024-11-19 10:57:06.259610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:42.199 [2024-11-19 10:57:06.259625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.199 [2024-11-19 10:57:06.259631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:42.199 [2024-11-19 10:57:06.259646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.199 [2024-11-19 10:57:06.259652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:42.199 [2024-11-19 10:57:06.259666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.199 [2024-11-19 10:57:06.259673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:42.199 [2024-11-19 10:57:06.259688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.199 [2024-11-19 10:57:06.259694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:42.199 [2024-11-19 10:57:06.259709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
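Editorial note: each completion above prints the NVMe status as "(SCT/SC)". Status code type 03h is Path Related Status and status code 02h is Asymmetric Access Inaccessible in the NVMe base specification, i.e. the target reported the namespace's ANA state as inaccessible over this path while the test forces a failover, so every queued command on qpair sqid:1 is failed back to the host. A minimal sketch of how those fields unpack from completion-queue-entry dword 3 (bit layout per the NVMe base spec; the function and variable names here are illustrative, not SPDK's actual helpers -- SPDK itself carries these as bitfields in struct spdk_nvme_status):

    #include <stdint.h>
    #include <stdio.h>

    /* Unpack the status word from NVMe CQE dword 3 (bits 31:16),
     * matching what the log prints as
     * "(SCT/SC) ... p:<phase> m:<more> dnr:<do not retry>". */
    static void print_cpl_status(uint32_t cqe_dw3)
    {
        uint16_t st  = (uint16_t)(cqe_dw3 >> 16);
        unsigned p   = st & 0x1;          /* bit 16: phase tag            */
        unsigned sc  = (st >> 1) & 0xff;  /* bits 24:17: status code      */
        unsigned sct = (st >> 9) & 0x7;   /* bits 27:25: status code type */
        unsigned m   = (st >> 14) & 0x1;  /* bit 30: more                 */
        unsigned dnr = (st >> 15) & 0x1;  /* bit 31: do not retry         */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        /* SCT=0x3 (path related), SC=0x02 (asymmetric access
         * inaccessible), as seen in every completion above. */
        print_cpl_status(0x3u << 25 | 0x02u << 17);  /* "(03/02) p:0 m:0 dnr:0" */
        return 0;
    }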
00:29:42.199 [... tail of the 10:57:06 burst elided (final completion sqhd:0014 at 10:57:06.259909); the periodic throughput samples that follow show the I/O rate dipping while the path is inaccessible and recovering afterwards ...]
00:29:42.199 12332.17 IOPS, 48.17 MiB/s [2024-11-19T09:57:21.394Z]
11383.54 IOPS, 44.47 MiB/s [2024-11-19T09:57:21.394Z]
10570.43 IOPS, 41.29 MiB/s [2024-11-19T09:57:21.394Z]
9959.27 IOPS, 38.90 MiB/s [2024-11-19T09:57:21.394Z]
10149.44 IOPS, 39.65 MiB/s [2024-11-19T09:57:21.394Z]
10312.76 IOPS, 40.28 MiB/s [2024-11-19T09:57:21.394Z]
10711.67 IOPS, 41.84 MiB/s [2024-11-19T09:57:21.394Z]
11041.11 IOPS, 43.13 MiB/s [2024-11-19T09:57:21.394Z]
11214.15 IOPS, 43.81 MiB/s [2024-11-19T09:57:21.394Z]
11298.48 IOPS, 44.13 MiB/s [2024-11-19T09:57:21.394Z]
11370.32 IOPS, 44.42 MiB/s [2024-11-19T09:57:21.394Z]
11606.83 IOPS, 45.34 MiB/s [2024-11-19T09:57:21.394Z]
11830.50 IOPS, 46.21 MiB/s [2024-11-19T09:57:21.394Z]
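Editorial note: the IOPS and MiB/s columns are mutually consistent if each I/O is the len:8 seen in the commands above, i.e. 8 blocks x 512 B = 4 KiB per I/O (512-byte logical blocks are an assumption; the log does not state the namespace block size):

    12332.17 IOPS x 4096 B = 50,512,568 B/s / 1,048,576 = 48.17 MiB/s  (first sample)
    11830.50 IOPS x 4096 B = 48,457,728 B/s / 1,048,576 = 46.21 MiB/s  (last sample)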
00:29:42.199 [2024-11-19 10:57:18.843638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.199 [2024-11-19 10:57:18.843673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:42.201 [... ~60 further command/completion pairs elided: from 10:57:18.843638 onward another ANA transition fails the outstanding I/O on qpair sqid:1 (READ lba:102976-103168, WRITE lba:103200-104000, all len:8) with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0046 through 007f and wrapping to 0003; the flood continues below ...]
00:29:42.201 [2024-11-19 10:57:18.845522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:22 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.845528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.845867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.845876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.845888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.845893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.845904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.845911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.845922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.845927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.845937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.845943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.845953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.845958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.845968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.845973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.845983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.845989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.846000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.201 [2024-11-19 10:57:18.846005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 
10:57:18.846015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.846020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.846031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.846036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.846046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.846051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.846062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.846067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.846077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.201 [2024-11-19 10:57:18.846082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:42.201 [2024-11-19 10:57:18.846093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.201 [2024-11-19 10:57:18.846100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:42.201 11946.32 IOPS, 46.67 MiB/s [2024-11-19T09:57:21.396Z] 11979.31 IOPS, 46.79 MiB/s [2024-11-19T09:57:21.396Z] Received shutdown signal, test time was about 26.645657 seconds 00:29:42.201 00:29:42.201 Latency(us) 00:29:42.201 [2024-11-19T09:57:21.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.201 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:42.201 Verification LBA range: start 0x0 length 0x4000 00:29:42.201 Nvme0n1 : 26.65 11997.88 46.87 0.00 0.00 10649.59 549.55 3075822.93 00:29:42.201 [2024-11-19T09:57:21.396Z] =================================================================================================================== 00:29:42.202 [2024-11-19T09:57:21.397Z] Total : 11997.88 46.87 0.00 0.00 10649.59 549.55 3075822.93 00:29:42.202 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.202 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:29:42.202 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:42.202 10:57:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:29:42.202 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.202 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.463 rmmod nvme_tcp 00:29:42.463 rmmod nvme_fabrics 00:29:42.463 rmmod nvme_keyring 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1152491 ']' 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1152491 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1152491 ']' 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1152491 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1152491 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1152491' 00:29:42.463 killing process with pid 1152491 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1152491 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1152491 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.463 10:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.009 00:29:45.009 real 0m41.233s 00:29:45.009 user 1m46.560s 00:29:45.009 sys 0m11.531s 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:45.009 ************************************ 00:29:45.009 END TEST nvmf_host_multipath_status 00:29:45.009 ************************************ 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.009 ************************************ 00:29:45.009 START TEST nvmf_discovery_remove_ifc 00:29:45.009 ************************************ 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:45.009 * Looking for test storage... 
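The nvmftestfini trace above is the standard NVMe-oF/TCP teardown: stop the target process, unload the initiator-side kernel modules, strip only the SPDK-tagged firewall rules, and dismantle the namespace plumbing. A minimal sketch of the same sequence, assuming the interface and namespace names used on this rig (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and assuming that _remove_spdk_ns simply deletes the target namespace:

    # Hedged sketch of the nvmftestfini teardown traced above; names are
    # specific to this rig and the netns deletion is an assumption.
    kill "$nvmfpid"                                   # stop nvmf_tgt (pid 1152491 in this run)
    modprobe -v -r nvme-tcp                           # unload initiator kernel modules
    modprobe -v -r nvme-fabrics
    # drop only firewall rules carrying the SPDK_NVMF comment tag,
    # leaving the rest of the ruleset untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                          # clear the initiator-side address
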
00:29:45.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:45.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.009 --rc genhtml_branch_coverage=1 00:29:45.009 --rc genhtml_function_coverage=1 00:29:45.009 --rc genhtml_legend=1 00:29:45.009 --rc geninfo_all_blocks=1 00:29:45.009 --rc geninfo_unexecuted_blocks=1 00:29:45.009 00:29:45.009 ' 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:45.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.009 --rc genhtml_branch_coverage=1 00:29:45.009 --rc genhtml_function_coverage=1 00:29:45.009 --rc genhtml_legend=1 00:29:45.009 --rc geninfo_all_blocks=1 00:29:45.009 --rc geninfo_unexecuted_blocks=1 00:29:45.009 00:29:45.009 ' 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:45.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.009 --rc genhtml_branch_coverage=1 00:29:45.009 --rc genhtml_function_coverage=1 00:29:45.009 --rc genhtml_legend=1 00:29:45.009 --rc geninfo_all_blocks=1 00:29:45.009 --rc geninfo_unexecuted_blocks=1 00:29:45.009 00:29:45.009 ' 00:29:45.009 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:45.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.009 --rc genhtml_branch_coverage=1 00:29:45.009 --rc genhtml_function_coverage=1 00:29:45.009 --rc genhtml_legend=1 00:29:45.010 --rc geninfo_all_blocks=1 00:29:45.010 --rc geninfo_unexecuted_blocks=1 00:29:45.010 00:29:45.010 ' 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.010 
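The cmp_versions trace just above is how scripts/common.sh decides whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared component by component. A condensed sketch of that less-than test, assuming purely numeric components (the real helper additionally validates each field through its decimal function, as the trace shows):

    # Sketch of the version less-than test traced above (scripts/common.sh).
    lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"              # split "1.15" into (1 15)
        IFS='.-:' read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # first larger field: not less-than
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # first smaller field: less-than
        done
        return 1                                      # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x; use the legacy LCOV_OPTS"
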
10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:45.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.010 10:57:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:29:53.151 10:57:31 
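One wart worth flagging from the common.sh sourcing above: the "[: : integer expression expected" complaint at nvmf/common.sh line 33 comes from running '[' '' -eq 1 ']' with an empty flag variable. A defaulted expansion keeps the test numeric; FLAG below is a hypothetical stand-in for whichever SPDK_TEST_* variable was unset:

    # The failing pattern: [ "$FLAG" -eq 1 ] errors out when FLAG is empty.
    # Defaulting the expansion makes the comparison always numeric.
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi
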
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.151 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:53.152 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.152 10:57:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:53.152 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:53.152 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:53.152 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.152 
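The nvmf_tcp_init steps traced above build the usual two-endpoint topology out of the two physical ports: one port moves into a private namespace and becomes the target side, the other stays in the root namespace as the initiator. Condensed, using the cvl_0_* names from this rig; the ping exchange that follows in the log then confirms both directions before any NVMe traffic is attempted:

    # Recreate the target/initiator split performed above (names from this log).
    ip netns add cvl_0_0_ns_spdk                          # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port through the initiator-side firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
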
10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:29:53.152 00:29:53.152 --- 10.0.0.2 ping statistics --- 00:29:53.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.152 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:29:53.152 00:29:53.152 --- 10.0.0.1 ping statistics --- 00:29:53.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.152 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1163406 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1163406 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1163406 ']' 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
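nvmfappstart then launches nvmf_tgt inside the namespace, and waitforlisten blocks until the RPC socket answers; that gate is what prints the "Waiting for process..." line below. A rough equivalent of the startup wait (rpc_get_methods and the -t timeout flag are standard rpc.py features, but treat the polling loop itself as a sketch rather than the helper's actual code):

    # Start the target in the namespace and poll until its RPC socket is live.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # rpc.py fails until the app has created /var/tmp/spdk.sock (the default socket)
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done
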
00:29:53.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.152 10:57:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:53.152 [2024-11-19 10:57:31.521611] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:29:53.152 [2024-11-19 10:57:31.521681] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.152 [2024-11-19 10:57:31.621703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.152 [2024-11-19 10:57:31.671606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.152 [2024-11-19 10:57:31.671655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.152 [2024-11-19 10:57:31.671664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.153 [2024-11-19 10:57:31.671678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.153 [2024-11-19 10:57:31.671684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.153 [2024-11-19 10:57:31.672452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:53.414 [2024-11-19 10:57:32.410593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.414 [2024-11-19 10:57:32.418865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:53.414 null0 00:29:53.414 [2024-11-19 10:57:32.450803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1163621 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1163621 /tmp/host.sock 00:29:53.414 10:57:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1163621 ']' 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:53.414 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.414 10:57:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:53.414 [2024-11-19 10:57:32.528442] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:29:53.414 [2024-11-19 10:57:32.528506] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163621 ] 00:29:53.675 [2024-11-19 10:57:32.621729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.675 [2024-11-19 10:57:32.674450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:54.247 10:57:33 
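The host side attaches through the discovery service on port 8009 rather than connecting to 4420 directly, and the three timeout flags are what later let the test observe the controller being dropped quickly once the interface disappears. The same call restated on its own, with the values used in this run:

    # Attach via discovery with aggressive failure detection: give up on a lost
    # controller after 2 s, retry the connection every 1 s, and fail pending
    # I/O up to the caller after 1 s.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach
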
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.247 10:57:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:55.663 [2024-11-19 10:57:34.500324] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:55.663 [2024-11-19 10:57:34.500345] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:55.663 [2024-11-19 10:57:34.500359] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:55.663 [2024-11-19 10:57:34.586639] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:55.663 [2024-11-19 10:57:34.689349] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:55.663 [2024-11-19 10:57:34.690306] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a5f3f0:1 started. 00:29:55.663 [2024-11-19 10:57:34.691854] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:55.663 [2024-11-19 10:57:34.691894] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:55.663 [2024-11-19 10:57:34.691915] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:55.663 [2024-11-19 10:57:34.691929] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:55.663 [2024-11-19 10:57:34.691948] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:55.663 [2024-11-19 10:57:34.698386] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a5f3f0 was disconnected and freed. delete nvme_qpair. 
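wait_for_bdev, whose one-second iterations fill the rest of this trace, is just a poll of bdev_get_bdevs until the bdev name list matches the expected value. A condensed sketch of the two helpers, following the rpc_cmd / jq / sort / xargs pipeline visible in the trace below:

    # Poll the host app's bdev list until it equals the expected string.
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs     # xargs joins the names onto one line
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # bdev appears once discovery attaches the controller
    wait_for_bdev ''        # and disappears again after the interface is removed
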
00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:55.663 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:55.926 10:57:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:56.869 10:57:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:56.869 10:57:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:56.869 10:57:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:56.869 10:57:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.869 10:57:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:56.869 10:57:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.869 10:57:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:56.869 10:57:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.869 10:57:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:56.869 10:57:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:57.812 10:57:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:57.813 10:57:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.813 10:57:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:57.813 10:57:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.813 10:57:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:57.813 10:57:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:57.813 10:57:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:57.813 10:57:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.073 10:57:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:58.073 10:57:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:59.016 10:57:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:59.016 10:57:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.016 10:57:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:59.016 10:57:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.016 10:57:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:59.016 10:57:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:59.016 10:57:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:59.016 10:57:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.016 10:57:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:59.016 10:57:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:59.957 10:57:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:59.957 10:57:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.957 10:57:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:59.957 10:57:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.957 10:57:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:59.957 10:57:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:59.957 10:57:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:59.957 10:57:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.958 10:57:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:59.958 10:57:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:01.341 10:57:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:01.341 10:57:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.341 10:57:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:01.341 10:57:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.341 10:57:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:01.341 10:57:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:01.341 10:57:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:01.341 [2024-11-19 10:57:40.132592] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:01.341 [2024-11-19 10:57:40.132639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.341 [2024-11-19 10:57:40.132648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.341 [2024-11-19 10:57:40.132657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.341 [2024-11-19 10:57:40.132664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.341 [2024-11-19 10:57:40.132670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.341 [2024-11-19 10:57:40.132675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.341 [2024-11-19 10:57:40.132681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.341 [2024-11-19 10:57:40.132686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.341 [2024-11-19 10:57:40.132697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.341 [2024-11-19 10:57:40.132702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.341 [2024-11-19 10:57:40.132708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3bc00 is same with the state(6) to be set 00:30:01.341 [2024-11-19 10:57:40.142613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3bc00 (9): 
Bad file descriptor 00:30:01.341 [2024-11-19 10:57:40.152649] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:01.341 [2024-11-19 10:57:40.152658] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:01.341 [2024-11-19 10:57:40.152662] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:01.341 [2024-11-19 10:57:40.152666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:01.341 [2024-11-19 10:57:40.152684] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:01.341 10:57:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.341 10:57:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:01.341 10:57:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:02.284 10:57:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:02.284 10:57:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:02.284 10:57:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:02.284 10:57:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.284 10:57:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:02.284 10:57:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:02.284 10:57:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:02.284 [2024-11-19 10:57:41.211255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:02.285 [2024-11-19 10:57:41.211350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a3bc00 with addr=10.0.0.2, port=4420 00:30:02.285 [2024-11-19 10:57:41.211383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3bc00 is same with the state(6) to be set 00:30:02.285 [2024-11-19 10:57:41.211440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3bc00 (9): Bad file descriptor 00:30:02.285 [2024-11-19 10:57:41.212562] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:30:02.285 [2024-11-19 10:57:41.212632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:02.285 [2024-11-19 10:57:41.212655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:02.285 [2024-11-19 10:57:41.212679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:02.285 [2024-11-19 10:57:41.212700] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:02.285 [2024-11-19 10:57:41.212716] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
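The errno 110 (ETIMEDOUT) failures above are the direct consequence of steps @75/@76 earlier in the log: the target's address was deleted and its interface downed inside the namespace, so the established TCP connection simply times out. bdev_nvme then walks its recovery chain: delete the qpairs, disconnect the controller, and start the reconnect poller, which cannot succeed while the interface stays down. While this is in flight, the controller state can be inspected with bdev_nvme_get_controllers (a real SPDK RPC; the exact output shape varies by release):

    # Dump bdev_nvme controller state from the host app used in this run.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq .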
00:30:02.285 [2024-11-19 10:57:41.212730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:02.285 [2024-11-19 10:57:41.212762] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:02.285 [2024-11-19 10:57:41.212777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:02.285 10:57:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.285 10:57:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:02.285 10:57:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:03.228 [2024-11-19 10:57:42.215201] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:03.228 [2024-11-19 10:57:42.215217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:03.228 [2024-11-19 10:57:42.215225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:03.228 [2024-11-19 10:57:42.215230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:03.228 [2024-11-19 10:57:42.215236] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:30:03.229 [2024-11-19 10:57:42.215241] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:03.229 [2024-11-19 10:57:42.215244] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:03.229 [2024-11-19 10:57:42.215248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
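Each reconnect attempt now fails at connect() with errno 110 because 10.0.0.2 no longer exists, so the poller clears pending resets and re-arms. How long bdev_nvme retries before giving up on the controller, and therefore deleting nvme0n1 so that get_bdev_list finally returns '', is governed by the reconnect options. The trace does not show which values this test configured, so the numbers below are illustrative only:

    # Real RPC and flags (SPDK >= 21.10); example values, not the test's.
    # These are global options and must be set before controllers are attached.
    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options \
        --reconnect-delay-sec 1 \
        --ctrlr-loss-timeout-sec 4 \
        --fast-io-fail-timeout-sec 2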
00:30:03.229 [2024-11-19 10:57:42.215266] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:03.229 [2024-11-19 10:57:42.215283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.229 [2024-11-19 10:57:42.215290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.229 [2024-11-19 10:57:42.215298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.229 [2024-11-19 10:57:42.215303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.229 [2024-11-19 10:57:42.215309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.229 [2024-11-19 10:57:42.215314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.229 [2024-11-19 10:57:42.215320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.229 [2024-11-19 10:57:42.215325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.229 [2024-11-19 10:57:42.215331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.229 [2024-11-19 10:57:42.215335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:03.229 [2024-11-19 10:57:42.215341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
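With the controller unrecoverable, the discovery poller prunes its entry for cnode0 (remove_discovery_entry), and the admin commands still queued on the dead connection, the four ASYNC EVENT REQUESTs and the KEEP ALIVE, complete as ABORTED - SQ DELETION: expected teardown noise rather than a new failure. If needed, the surviving discovery state can be dumped (bdev_nvme_get_discovery_info is a real SPDK RPC, though its fields vary by release):

    rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq .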
00:30:03.229 [2024-11-19 10:57:42.215675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2b340 (9): Bad file descriptor 00:30:03.229 [2024-11-19 10:57:42.216685] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:03.229 [2024-11-19 10:57:42.216693] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:03.229 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.490 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:03.490 10:57:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:04.434 10:57:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:04.434 10:57:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:04.434 10:57:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:04.434 10:57:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.434 10:57:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:04.434 10:57:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:04.434 10:57:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:04.434 10:57:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.434 10:57:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:04.434 10:57:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:05.377 [2024-11-19 10:57:44.234334] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:05.377 [2024-11-19 10:57:44.234349] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:05.377 [2024-11-19 10:57:44.234358] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:05.377 [2024-11-19 10:57:44.363745] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:05.377 [2024-11-19 10:57:44.422308] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:30:05.377 [2024-11-19 10:57:44.422982] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1a30130:1 started. 00:30:05.377 [2024-11-19 10:57:44.423872] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:05.377 [2024-11-19 10:57:44.423898] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:05.377 [2024-11-19 10:57:44.423912] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:05.377 [2024-11-19 10:57:44.423923] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:05.377 [2024-11-19 10:57:44.423929] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:05.377 [2024-11-19 10:57:44.432236] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1a30130 was disconnected and freed. delete nvme_qpair. 
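With the address restored and the link back up (@82/@83 above), the discovery poller re-finds nqn.2016-06.io.spdk:cnode0 and attaches a brand-new controller, nvme1, as instance 2, so the test now waits for nvme1n1 instead. The recovery half of the test, condensed (commands as they appear in the trace, wait helper as sketched earlier):

    # Restore the target address inside its namespace, then wait for rediscovery.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # reconstructed helper; polls until the list is exactly "nvme1n1"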
00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1163621 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1163621 ']' 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1163621 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.377 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1163621 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1163621' 00:30:05.639 killing process with pid 1163621 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1163621 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1163621 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:05.639 rmmod nvme_tcp 00:30:05.639 rmmod nvme_fabrics 00:30:05.639 rmmod nvme_keyring 00:30:05.639 10:57:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1163406 ']' 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1163406 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1163406 ']' 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1163406 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.639 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1163406 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1163406' 00:30:05.901 killing process with pid 1163406 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1163406 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1163406 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.901 10:57:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.448 00:30:08.448 real 0m23.321s 00:30:08.448 user 0m27.303s 00:30:08.448 sys 0m7.109s 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:08.448 ************************************ 00:30:08.448 END TEST nvmf_discovery_remove_ifc 00:30:08.448 ************************************ 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.448 ************************************ 00:30:08.448 START TEST nvmf_identify_kernel_target 00:30:08.448 ************************************ 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:08.448 * Looking for test storage... 00:30:08.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:30:08.448 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:08.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.449 --rc genhtml_branch_coverage=1 00:30:08.449 --rc genhtml_function_coverage=1 00:30:08.449 --rc genhtml_legend=1 00:30:08.449 --rc geninfo_all_blocks=1 00:30:08.449 --rc geninfo_unexecuted_blocks=1 00:30:08.449 00:30:08.449 ' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:08.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.449 --rc genhtml_branch_coverage=1 00:30:08.449 --rc genhtml_function_coverage=1 00:30:08.449 --rc genhtml_legend=1 00:30:08.449 --rc geninfo_all_blocks=1 00:30:08.449 --rc geninfo_unexecuted_blocks=1 00:30:08.449 00:30:08.449 ' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:08.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.449 --rc genhtml_branch_coverage=1 00:30:08.449 --rc genhtml_function_coverage=1 00:30:08.449 --rc genhtml_legend=1 00:30:08.449 --rc geninfo_all_blocks=1 00:30:08.449 --rc geninfo_unexecuted_blocks=1 00:30:08.449 00:30:08.449 ' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:08.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.449 --rc genhtml_branch_coverage=1 00:30:08.449 --rc genhtml_function_coverage=1 00:30:08.449 --rc genhtml_legend=1 00:30:08.449 --rc geninfo_all_blocks=1 00:30:08.449 --rc geninfo_unexecuted_blocks=1 00:30:08.449 00:30:08.449 ' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:30:08.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.449 10:57:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.039 10:57:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:15.039 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:15.039 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:15.039 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:15.301 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:15.301 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.301 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:15.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:30:15.562 00:30:15.562 --- 10.0.0.2 ping statistics --- 00:30:15.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.562 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:30:15.562 00:30:15.562 --- 10.0.0.1 ping statistics --- 00:30:15.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.562 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.562 10:57:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:15.562 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:15.563 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:15.563 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:15.563 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:15.563 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:15.563 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:30:15.563 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:30:15.563 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:15.563 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:15.563 10:57:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:18.869 Waiting for block devices as requested 00:30:18.869 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:18.869 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:19.130 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:19.130 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:19.130 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:19.392 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:19.392 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:19.392 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:19.653 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:30:19.653 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:19.914 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:19.914 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:19.914 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:20.177 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:20.177 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:20.177 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:20.440 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
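[editor's note] The zoned-device check traced just above is the gate configure_kernel_target applies while scanning /sys/block for a namespace to export. A condensed sketch of that scan in the same shell idiom, assuming the sysfs layout seen in this run (block_in_use is the helper named in the log; its body is not reproduced here):

for block in /sys/block/nvme*; do
  [[ -e $block ]] || continue
  # queue/zoned reads "none" for regular namespaces; zoned devices are skipped
  if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
    continue
  fi
  nvme=/dev/${block##*/}    # candidate; block_in_use still has to clear it
  break
done

The GPT probe that follows in the log (spdk-gpt.py, then blkid -s PTTYPE) serves the same purpose: only a device with no recognizable partition table is claimed for the kernel target.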
00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:20.701 No valid GPT data, bailing 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:30:20.701 00:30:20.701 Discovery Log Number of Records 2, Generation counter 2 00:30:20.701 =====Discovery Log Entry 0====== 00:30:20.701 trtype: tcp 00:30:20.701 adrfam: ipv4 00:30:20.701 subtype: current discovery subsystem 00:30:20.701 treq: not specified, sq flow control disable supported 00:30:20.701 portid: 1 00:30:20.701 trsvcid: 4420 00:30:20.701 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:20.701 traddr: 10.0.0.1 00:30:20.701 eflags: none 00:30:20.701 sectype: none 00:30:20.701 =====Discovery Log Entry 1====== 00:30:20.701 trtype: tcp 00:30:20.701 adrfam: ipv4 00:30:20.701 subtype: nvme subsystem 00:30:20.701 treq: not specified, sq flow control disable 
supported 00:30:20.701 portid: 1 00:30:20.701 trsvcid: 4420 00:30:20.701 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:20.701 traddr: 10.0.0.1 00:30:20.701 eflags: none 00:30:20.701 sectype: none 00:30:20.701 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:30:20.701 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:20.963 ===================================================== 00:30:20.963 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:20.963 ===================================================== 00:30:20.963 Controller Capabilities/Features 00:30:20.963 ================================ 00:30:20.963 Vendor ID: 0000 00:30:20.963 Subsystem Vendor ID: 0000 00:30:20.963 Serial Number: 9006460fe02ec53f2da4 00:30:20.963 Model Number: Linux 00:30:20.963 Firmware Version: 6.8.9-20 00:30:20.963 Recommended Arb Burst: 0 00:30:20.963 IEEE OUI Identifier: 00 00 00 00:30:20.963 Multi-path I/O 00:30:20.963 May have multiple subsystem ports: No 00:30:20.963 May have multiple controllers: No 00:30:20.963 Associated with SR-IOV VF: No 00:30:20.963 Max Data Transfer Size: Unlimited 00:30:20.963 Max Number of Namespaces: 0 00:30:20.963 Max Number of I/O Queues: 1024 00:30:20.963 NVMe Specification Version (VS): 1.3 00:30:20.963 NVMe Specification Version (Identify): 1.3 00:30:20.963 Maximum Queue Entries: 1024 00:30:20.963 Contiguous Queues Required: No 00:30:20.963 Arbitration Mechanisms Supported 00:30:20.963 Weighted Round Robin: Not Supported 00:30:20.963 Vendor Specific: Not Supported 00:30:20.963 Reset Timeout: 7500 ms 00:30:20.963 Doorbell Stride: 4 bytes 00:30:20.963 NVM Subsystem Reset: Not Supported 00:30:20.963 Command Sets Supported 00:30:20.963 NVM Command Set: Supported 00:30:20.963 Boot Partition: Not Supported 00:30:20.963 Memory Page Size Minimum: 4096 bytes 00:30:20.963 Memory Page Size Maximum: 4096 bytes 00:30:20.963 Persistent Memory Region: Not Supported 00:30:20.963 Optional Asynchronous Events Supported 00:30:20.963 Namespace Attribute Notices: Not Supported 00:30:20.963 Firmware Activation Notices: Not Supported 00:30:20.963 ANA Change Notices: Not Supported 00:30:20.963 PLE Aggregate Log Change Notices: Not Supported 00:30:20.963 LBA Status Info Alert Notices: Not Supported 00:30:20.963 EGE Aggregate Log Change Notices: Not Supported 00:30:20.963 Normal NVM Subsystem Shutdown event: Not Supported 00:30:20.963 Zone Descriptor Change Notices: Not Supported 00:30:20.963 Discovery Log Change Notices: Supported 00:30:20.963 Controller Attributes 00:30:20.963 128-bit Host Identifier: Not Supported 00:30:20.963 Non-Operational Permissive Mode: Not Supported 00:30:20.963 NVM Sets: Not Supported 00:30:20.963 Read Recovery Levels: Not Supported 00:30:20.963 Endurance Groups: Not Supported 00:30:20.963 Predictable Latency Mode: Not Supported 00:30:20.963 Traffic Based Keep ALive: Not Supported 00:30:20.963 Namespace Granularity: Not Supported 00:30:20.963 SQ Associations: Not Supported 00:30:20.963 UUID List: Not Supported 00:30:20.963 Multi-Domain Subsystem: Not Supported 00:30:20.963 Fixed Capacity Management: Not Supported 00:30:20.963 Variable Capacity Management: Not Supported 00:30:20.963 Delete Endurance Group: Not Supported 00:30:20.963 Delete NVM Set: Not Supported 00:30:20.963 Extended LBA Formats Supported: Not Supported 00:30:20.963 Flexible Data Placement 
Supported: Not Supported 00:30:20.963 00:30:20.963 Controller Memory Buffer Support 00:30:20.963 ================================ 00:30:20.963 Supported: No 00:30:20.963 00:30:20.963 Persistent Memory Region Support 00:30:20.963 ================================ 00:30:20.963 Supported: No 00:30:20.963 00:30:20.963 Admin Command Set Attributes 00:30:20.963 ============================ 00:30:20.963 Security Send/Receive: Not Supported 00:30:20.963 Format NVM: Not Supported 00:30:20.963 Firmware Activate/Download: Not Supported 00:30:20.963 Namespace Management: Not Supported 00:30:20.963 Device Self-Test: Not Supported 00:30:20.963 Directives: Not Supported 00:30:20.963 NVMe-MI: Not Supported 00:30:20.963 Virtualization Management: Not Supported 00:30:20.963 Doorbell Buffer Config: Not Supported 00:30:20.963 Get LBA Status Capability: Not Supported 00:30:20.963 Command & Feature Lockdown Capability: Not Supported 00:30:20.963 Abort Command Limit: 1 00:30:20.963 Async Event Request Limit: 1 00:30:20.963 Number of Firmware Slots: N/A 00:30:20.963 Firmware Slot 1 Read-Only: N/A 00:30:20.963 Firmware Activation Without Reset: N/A 00:30:20.963 Multiple Update Detection Support: N/A 00:30:20.963 Firmware Update Granularity: No Information Provided 00:30:20.963 Per-Namespace SMART Log: No 00:30:20.963 Asymmetric Namespace Access Log Page: Not Supported 00:30:20.963 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:20.963 Command Effects Log Page: Not Supported 00:30:20.963 Get Log Page Extended Data: Supported 00:30:20.963 Telemetry Log Pages: Not Supported 00:30:20.963 Persistent Event Log Pages: Not Supported 00:30:20.963 Supported Log Pages Log Page: May Support 00:30:20.963 Commands Supported & Effects Log Page: Not Supported 00:30:20.963 Feature Identifiers & Effects Log Page:May Support 00:30:20.963 NVMe-MI Commands & Effects Log Page: May Support 00:30:20.963 Data Area 4 for Telemetry Log: Not Supported 00:30:20.963 Error Log Page Entries Supported: 1 00:30:20.963 Keep Alive: Not Supported 00:30:20.963 00:30:20.963 NVM Command Set Attributes 00:30:20.963 ========================== 00:30:20.963 Submission Queue Entry Size 00:30:20.963 Max: 1 00:30:20.963 Min: 1 00:30:20.963 Completion Queue Entry Size 00:30:20.963 Max: 1 00:30:20.963 Min: 1 00:30:20.963 Number of Namespaces: 0 00:30:20.963 Compare Command: Not Supported 00:30:20.963 Write Uncorrectable Command: Not Supported 00:30:20.963 Dataset Management Command: Not Supported 00:30:20.963 Write Zeroes Command: Not Supported 00:30:20.963 Set Features Save Field: Not Supported 00:30:20.963 Reservations: Not Supported 00:30:20.963 Timestamp: Not Supported 00:30:20.963 Copy: Not Supported 00:30:20.963 Volatile Write Cache: Not Present 00:30:20.963 Atomic Write Unit (Normal): 1 00:30:20.963 Atomic Write Unit (PFail): 1 00:30:20.963 Atomic Compare & Write Unit: 1 00:30:20.963 Fused Compare & Write: Not Supported 00:30:20.963 Scatter-Gather List 00:30:20.963 SGL Command Set: Supported 00:30:20.963 SGL Keyed: Not Supported 00:30:20.963 SGL Bit Bucket Descriptor: Not Supported 00:30:20.963 SGL Metadata Pointer: Not Supported 00:30:20.963 Oversized SGL: Not Supported 00:30:20.963 SGL Metadata Address: Not Supported 00:30:20.963 SGL Offset: Supported 00:30:20.963 Transport SGL Data Block: Not Supported 00:30:20.963 Replay Protected Memory Block: Not Supported 00:30:20.963 00:30:20.963 Firmware Slot Information 00:30:20.963 ========================= 00:30:20.963 Active slot: 0 00:30:20.963 00:30:20.963 00:30:20.963 Error Log 00:30:20.963 
========= 00:30:20.963 00:30:20.963 Active Namespaces 00:30:20.963 ================= 00:30:20.963 Discovery Log Page 00:30:20.963 ================== 00:30:20.963 Generation Counter: 2 00:30:20.963 Number of Records: 2 00:30:20.963 Record Format: 0 00:30:20.963 00:30:20.963 Discovery Log Entry 0 00:30:20.963 ---------------------- 00:30:20.963 Transport Type: 3 (TCP) 00:30:20.963 Address Family: 1 (IPv4) 00:30:20.963 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:20.963 Entry Flags: 00:30:20.963 Duplicate Returned Information: 0 00:30:20.963 Explicit Persistent Connection Support for Discovery: 0 00:30:20.963 Transport Requirements: 00:30:20.963 Secure Channel: Not Specified 00:30:20.963 Port ID: 1 (0x0001) 00:30:20.964 Controller ID: 65535 (0xffff) 00:30:20.964 Admin Max SQ Size: 32 00:30:20.964 Transport Service Identifier: 4420 00:30:20.964 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:20.964 Transport Address: 10.0.0.1 00:30:20.964 Discovery Log Entry 1 00:30:20.964 ---------------------- 00:30:20.964 Transport Type: 3 (TCP) 00:30:20.964 Address Family: 1 (IPv4) 00:30:20.964 Subsystem Type: 2 (NVM Subsystem) 00:30:20.964 Entry Flags: 00:30:20.964 Duplicate Returned Information: 0 00:30:20.964 Explicit Persistent Connection Support for Discovery: 0 00:30:20.964 Transport Requirements: 00:30:20.964 Secure Channel: Not Specified 00:30:20.964 Port ID: 1 (0x0001) 00:30:20.964 Controller ID: 65535 (0xffff) 00:30:20.964 Admin Max SQ Size: 32 00:30:20.964 Transport Service Identifier: 4420 00:30:20.964 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:20.964 Transport Address: 10.0.0.1 00:30:20.964 10:57:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:20.964 get_feature(0x01) failed 00:30:20.964 get_feature(0x02) failed 00:30:20.964 get_feature(0x04) failed 00:30:20.964 ===================================================== 00:30:20.964 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:20.964 ===================================================== 00:30:20.964 Controller Capabilities/Features 00:30:20.964 ================================ 00:30:20.964 Vendor ID: 0000 00:30:20.964 Subsystem Vendor ID: 0000 00:30:20.964 Serial Number: 65972f8c4ca21138b306 00:30:20.964 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:20.964 Firmware Version: 6.8.9-20 00:30:20.964 Recommended Arb Burst: 6 00:30:20.964 IEEE OUI Identifier: 00 00 00 00:30:20.964 Multi-path I/O 00:30:20.964 May have multiple subsystem ports: Yes 00:30:20.964 May have multiple controllers: Yes 00:30:20.964 Associated with SR-IOV VF: No 00:30:20.964 Max Data Transfer Size: Unlimited 00:30:20.964 Max Number of Namespaces: 1024 00:30:20.964 Max Number of I/O Queues: 128 00:30:20.964 NVMe Specification Version (VS): 1.3 00:30:20.964 NVMe Specification Version (Identify): 1.3 00:30:20.964 Maximum Queue Entries: 1024 00:30:20.964 Contiguous Queues Required: No 00:30:20.964 Arbitration Mechanisms Supported 00:30:20.964 Weighted Round Robin: Not Supported 00:30:20.964 Vendor Specific: Not Supported 00:30:20.964 Reset Timeout: 7500 ms 00:30:20.964 Doorbell Stride: 4 bytes 00:30:20.964 NVM Subsystem Reset: Not Supported 00:30:20.964 Command Sets Supported 00:30:20.964 NVM Command Set: Supported 00:30:20.964 Boot Partition: Not Supported 00:30:20.964 
Memory Page Size Minimum: 4096 bytes 00:30:20.964 Memory Page Size Maximum: 4096 bytes 00:30:20.964 Persistent Memory Region: Not Supported 00:30:20.964 Optional Asynchronous Events Supported 00:30:20.964 Namespace Attribute Notices: Supported 00:30:20.964 Firmware Activation Notices: Not Supported 00:30:20.964 ANA Change Notices: Supported 00:30:20.964 PLE Aggregate Log Change Notices: Not Supported 00:30:20.964 LBA Status Info Alert Notices: Not Supported 00:30:20.964 EGE Aggregate Log Change Notices: Not Supported 00:30:20.964 Normal NVM Subsystem Shutdown event: Not Supported 00:30:20.964 Zone Descriptor Change Notices: Not Supported 00:30:20.964 Discovery Log Change Notices: Not Supported 00:30:20.964 Controller Attributes 00:30:20.964 128-bit Host Identifier: Supported 00:30:20.964 Non-Operational Permissive Mode: Not Supported 00:30:20.964 NVM Sets: Not Supported 00:30:20.964 Read Recovery Levels: Not Supported 00:30:20.964 Endurance Groups: Not Supported 00:30:20.964 Predictable Latency Mode: Not Supported 00:30:20.964 Traffic Based Keep ALive: Supported 00:30:20.964 Namespace Granularity: Not Supported 00:30:20.964 SQ Associations: Not Supported 00:30:20.964 UUID List: Not Supported 00:30:20.964 Multi-Domain Subsystem: Not Supported 00:30:20.964 Fixed Capacity Management: Not Supported 00:30:20.964 Variable Capacity Management: Not Supported 00:30:20.964 Delete Endurance Group: Not Supported 00:30:20.964 Delete NVM Set: Not Supported 00:30:20.964 Extended LBA Formats Supported: Not Supported 00:30:20.964 Flexible Data Placement Supported: Not Supported 00:30:20.964 00:30:20.964 Controller Memory Buffer Support 00:30:20.964 ================================ 00:30:20.964 Supported: No 00:30:20.964 00:30:20.964 Persistent Memory Region Support 00:30:20.964 ================================ 00:30:20.964 Supported: No 00:30:20.964 00:30:20.964 Admin Command Set Attributes 00:30:20.964 ============================ 00:30:20.964 Security Send/Receive: Not Supported 00:30:20.964 Format NVM: Not Supported 00:30:20.964 Firmware Activate/Download: Not Supported 00:30:20.964 Namespace Management: Not Supported 00:30:20.964 Device Self-Test: Not Supported 00:30:20.964 Directives: Not Supported 00:30:20.964 NVMe-MI: Not Supported 00:30:20.964 Virtualization Management: Not Supported 00:30:20.964 Doorbell Buffer Config: Not Supported 00:30:20.964 Get LBA Status Capability: Not Supported 00:30:20.964 Command & Feature Lockdown Capability: Not Supported 00:30:20.964 Abort Command Limit: 4 00:30:20.964 Async Event Request Limit: 4 00:30:20.964 Number of Firmware Slots: N/A 00:30:20.964 Firmware Slot 1 Read-Only: N/A 00:30:20.964 Firmware Activation Without Reset: N/A 00:30:20.964 Multiple Update Detection Support: N/A 00:30:20.964 Firmware Update Granularity: No Information Provided 00:30:20.964 Per-Namespace SMART Log: Yes 00:30:20.964 Asymmetric Namespace Access Log Page: Supported 00:30:20.964 ANA Transition Time : 10 sec 00:30:20.964 00:30:20.964 Asymmetric Namespace Access Capabilities 00:30:20.964 ANA Optimized State : Supported 00:30:20.964 ANA Non-Optimized State : Supported 00:30:20.964 ANA Inaccessible State : Supported 00:30:20.964 ANA Persistent Loss State : Supported 00:30:20.964 ANA Change State : Supported 00:30:20.964 ANAGRPID is not changed : No 00:30:20.964 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:20.964 00:30:20.964 ANA Group Identifier Maximum : 128 00:30:20.964 Number of ANA Group Identifiers : 128 00:30:20.964 Max Number of Allowed Namespaces : 1024 00:30:20.964 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:20.964 Command Effects Log Page: Supported 00:30:20.964 Get Log Page Extended Data: Supported 00:30:20.964 Telemetry Log Pages: Not Supported 00:30:20.964 Persistent Event Log Pages: Not Supported 00:30:20.964 Supported Log Pages Log Page: May Support 00:30:20.964 Commands Supported & Effects Log Page: Not Supported 00:30:20.964 Feature Identifiers & Effects Log Page:May Support 00:30:20.964 NVMe-MI Commands & Effects Log Page: May Support 00:30:20.964 Data Area 4 for Telemetry Log: Not Supported 00:30:20.964 Error Log Page Entries Supported: 128 00:30:20.964 Keep Alive: Supported 00:30:20.964 Keep Alive Granularity: 1000 ms 00:30:20.964 00:30:20.964 NVM Command Set Attributes 00:30:20.964 ========================== 00:30:20.964 Submission Queue Entry Size 00:30:20.964 Max: 64 00:30:20.964 Min: 64 00:30:20.964 Completion Queue Entry Size 00:30:20.964 Max: 16 00:30:20.964 Min: 16 00:30:20.964 Number of Namespaces: 1024 00:30:20.964 Compare Command: Not Supported 00:30:20.964 Write Uncorrectable Command: Not Supported 00:30:20.964 Dataset Management Command: Supported 00:30:20.964 Write Zeroes Command: Supported 00:30:20.964 Set Features Save Field: Not Supported 00:30:20.964 Reservations: Not Supported 00:30:20.964 Timestamp: Not Supported 00:30:20.964 Copy: Not Supported 00:30:20.964 Volatile Write Cache: Present 00:30:20.964 Atomic Write Unit (Normal): 1 00:30:20.964 Atomic Write Unit (PFail): 1 00:30:20.964 Atomic Compare & Write Unit: 1 00:30:20.964 Fused Compare & Write: Not Supported 00:30:20.964 Scatter-Gather List 00:30:20.964 SGL Command Set: Supported 00:30:20.964 SGL Keyed: Not Supported 00:30:20.964 SGL Bit Bucket Descriptor: Not Supported 00:30:20.964 SGL Metadata Pointer: Not Supported 00:30:20.964 Oversized SGL: Not Supported 00:30:20.964 SGL Metadata Address: Not Supported 00:30:20.964 SGL Offset: Supported 00:30:20.964 Transport SGL Data Block: Not Supported 00:30:20.964 Replay Protected Memory Block: Not Supported 00:30:20.964 00:30:20.964 Firmware Slot Information 00:30:20.964 ========================= 00:30:20.964 Active slot: 0 00:30:20.964 00:30:20.964 Asymmetric Namespace Access 00:30:20.964 =========================== 00:30:20.964 Change Count : 0 00:30:20.964 Number of ANA Group Descriptors : 1 00:30:20.964 ANA Group Descriptor : 0 00:30:20.964 ANA Group ID : 1 00:30:20.964 Number of NSID Values : 1 00:30:20.964 Change Count : 0 00:30:20.964 ANA State : 1 00:30:20.964 Namespace Identifier : 1 00:30:20.964 00:30:20.964 Commands Supported and Effects 00:30:20.964 ============================== 00:30:20.964 Admin Commands 00:30:20.964 -------------- 00:30:20.964 Get Log Page (02h): Supported 00:30:20.964 Identify (06h): Supported 00:30:20.964 Abort (08h): Supported 00:30:20.964 Set Features (09h): Supported 00:30:20.964 Get Features (0Ah): Supported 00:30:20.964 Asynchronous Event Request (0Ch): Supported 00:30:20.964 Keep Alive (18h): Supported 00:30:20.964 I/O Commands 00:30:20.964 ------------ 00:30:20.964 Flush (00h): Supported 00:30:20.964 Write (01h): Supported LBA-Change 00:30:20.964 Read (02h): Supported 00:30:20.964 Write Zeroes (08h): Supported LBA-Change 00:30:20.964 Dataset Management (09h): Supported 00:30:20.964 00:30:20.964 Error Log 00:30:20.964 ========= 00:30:20.964 Entry: 0 00:30:20.964 Error Count: 0x3 00:30:20.964 Submission Queue Id: 0x0 00:30:20.964 Command Id: 0x5 00:30:20.964 Phase Bit: 0 00:30:20.964 Status Code: 0x2 00:30:20.964 Status Code Type: 0x0 00:30:20.964 Do Not Retry: 1 00:30:20.964 
Error Location: 0x28 00:30:20.964 LBA: 0x0 00:30:20.964 Namespace: 0x0 00:30:20.964 Vendor Log Page: 0x0 00:30:20.964 ----------- 00:30:20.964 Entry: 1 00:30:20.964 Error Count: 0x2 00:30:20.964 Submission Queue Id: 0x0 00:30:20.964 Command Id: 0x5 00:30:20.964 Phase Bit: 0 00:30:20.964 Status Code: 0x2 00:30:20.964 Status Code Type: 0x0 00:30:20.964 Do Not Retry: 1 00:30:20.964 Error Location: 0x28 00:30:20.964 LBA: 0x0 00:30:20.964 Namespace: 0x0 00:30:20.964 Vendor Log Page: 0x0 00:30:20.964 ----------- 00:30:20.964 Entry: 2 00:30:20.964 Error Count: 0x1 00:30:20.964 Submission Queue Id: 0x0 00:30:20.964 Command Id: 0x4 00:30:20.964 Phase Bit: 0 00:30:20.964 Status Code: 0x2 00:30:20.964 Status Code Type: 0x0 00:30:20.964 Do Not Retry: 1 00:30:20.964 Error Location: 0x28 00:30:20.964 LBA: 0x0 00:30:20.964 Namespace: 0x0 00:30:20.964 Vendor Log Page: 0x0 00:30:20.964 00:30:20.964 Number of Queues 00:30:20.964 ================ 00:30:20.964 Number of I/O Submission Queues: 128 00:30:20.964 Number of I/O Completion Queues: 128 00:30:20.964 00:30:20.964 ZNS Specific Controller Data 00:30:20.964 ============================ 00:30:20.964 Zone Append Size Limit: 0 00:30:20.964 00:30:20.964 00:30:20.964 Active Namespaces 00:30:20.964 ================= 00:30:20.964 get_feature(0x05) failed 00:30:20.964 Namespace ID:1 00:30:20.964 Command Set Identifier: NVM (00h) 00:30:20.964 Deallocate: Supported 00:30:20.964 Deallocated/Unwritten Error: Not Supported 00:30:20.964 Deallocated Read Value: Unknown 00:30:20.964 Deallocate in Write Zeroes: Not Supported 00:30:20.964 Deallocated Guard Field: 0xFFFF 00:30:20.964 Flush: Supported 00:30:20.964 Reservation: Not Supported 00:30:20.964 Namespace Sharing Capabilities: Multiple Controllers 00:30:20.964 Size (in LBAs): 3750748848 (1788GiB) 00:30:20.964 Capacity (in LBAs): 3750748848 (1788GiB) 00:30:20.964 Utilization (in LBAs): 3750748848 (1788GiB) 00:30:20.964 UUID: 2c864dea-219a-48d9-94a4-5d990f45d8dc 00:30:20.964 Thin Provisioning: Not Supported 00:30:20.964 Per-NS Atomic Units: Yes 00:30:20.964 Atomic Write Unit (Normal): 8 00:30:20.964 Atomic Write Unit (PFail): 8 00:30:20.964 Preferred Write Granularity: 8 00:30:20.964 Atomic Compare & Write Unit: 8 00:30:20.964 Atomic Boundary Size (Normal): 0 00:30:20.964 Atomic Boundary Size (PFail): 0 00:30:20.964 Atomic Boundary Offset: 0 00:30:20.964 NGUID/EUI64 Never Reused: No 00:30:20.964 ANA group ID: 1 00:30:20.964 Namespace Write Protected: No 00:30:20.964 Number of LBA Formats: 1 00:30:20.964 Current LBA Format: LBA Format #00 00:30:20.964 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:20.964 00:30:20.964 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:30:20.964 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:20.964 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:30:20.964 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.964 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:30:20.964 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.964 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.964 rmmod nvme_tcp 00:30:20.964 rmmod nvme_fabrics 00:30:20.964 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.965 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:30:20.965 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:30:20.965 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:20.965 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:20.965 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:20.965 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:20.965 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:30:20.965 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:30:20.965 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:21.224 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:21.224 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:21.224 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:21.224 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.224 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.224 10:58:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:23.137 10:58:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:27.344 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:27.344 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:27.344 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:30:27.344 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:27.344 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:27.344 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:27.344 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:27.344 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:27.344 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:27.344 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:27.344 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:27.345 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:27.345 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:27.345 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:27.345 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:27.345 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:27.345 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:27.345 00:30:27.345 real 0m19.261s 00:30:27.345 user 0m5.312s 00:30:27.345 sys 0m10.964s 00:30:27.345 10:58:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.345 10:58:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.345 ************************************ 00:30:27.345 END TEST nvmf_identify_kernel_target 00:30:27.345 ************************************ 00:30:27.345 10:58:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:27.345 10:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:27.345 10:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.345 10:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.345 ************************************ 00:30:27.345 START TEST nvmf_auth_host 00:30:27.345 ************************************ 00:30:27.345 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:27.607 * Looking for test storage... 
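[editor's note] Before the next test starts, note the teardown order clean_kernel_target used above (common.sh@714 through @723): the port-to-subsystem symlink must be removed before the configfs directories, and the directories go leaf-first. A minimal sketch with the paths from this run; the redirect target of the logged 'echo 0' is not visible in xtrace, so the namespace enable file is an assumption here:

echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable  # assumed target of the logged 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet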
00:30:27.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:27.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.607 --rc genhtml_branch_coverage=1 00:30:27.607 --rc genhtml_function_coverage=1 00:30:27.607 --rc genhtml_legend=1 00:30:27.607 --rc geninfo_all_blocks=1 00:30:27.607 --rc geninfo_unexecuted_blocks=1 00:30:27.607 00:30:27.607 ' 00:30:27.607 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:27.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.608 --rc genhtml_branch_coverage=1 00:30:27.608 --rc genhtml_function_coverage=1 00:30:27.608 --rc genhtml_legend=1 00:30:27.608 --rc geninfo_all_blocks=1 00:30:27.608 --rc geninfo_unexecuted_blocks=1 00:30:27.608 00:30:27.608 ' 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:27.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.608 --rc genhtml_branch_coverage=1 00:30:27.608 --rc genhtml_function_coverage=1 00:30:27.608 --rc genhtml_legend=1 00:30:27.608 --rc geninfo_all_blocks=1 00:30:27.608 --rc geninfo_unexecuted_blocks=1 00:30:27.608 00:30:27.608 ' 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:27.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.608 --rc genhtml_branch_coverage=1 00:30:27.608 --rc genhtml_function_coverage=1 00:30:27.608 --rc genhtml_legend=1 00:30:27.608 --rc geninfo_all_blocks=1 00:30:27.608 --rc geninfo_unexecuted_blocks=1 00:30:27.608 00:30:27.608 ' 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.608 10:58:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:27.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.608 10:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.753 10:58:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:35.753 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:35.753 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.753 
10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:35.753 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:35.753 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.753 10:58:13 
00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:35.753 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:35.754 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:35.754 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:35.754 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:35.754 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:35.754 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:35.754 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:35.754 10:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:35.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:35.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms
00:30:35.754
00:30:35.754 --- 10.0.0.2 ping statistics ---
00:30:35.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:35.754 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:35.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:35.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms
00:30:35.754
00:30:35.754 --- 10.0.0.1 ping statistics ---
00:30:35.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:35.754 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1177933
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1177933
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1177933 ']'
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
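Condensed, the nvmf_tcp_init sequence traced above turns one dual-port NIC into a two-endpoint test topology: the target port is moved into a private network namespace, each side gets a /24 address, an iptables rule opens the NVMe/TCP port, reachability is verified with ping in both directions, and the target application is launched inside the namespace. The same steps extracted from the trace (interface names and flags as in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &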
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:35.754 10:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d3df9ddb4bb9caf825873b0193ff43cb
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kSK
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d3df9ddb4bb9caf825873b0193ff43cb 0
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d3df9ddb4bb9caf825873b0193ff43cb 0
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d3df9ddb4bb9caf825873b0193ff43cb
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kSK
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kSK
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kSK
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=caec9247559968b88ff0ae0f17d9fa5f7573978902889d4f34fcb757ea7fd067
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.USA
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key caec9247559968b88ff0ae0f17d9fa5f7573978902889d4f34fcb757ea7fd067 3
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 caec9247559968b88ff0ae0f17d9fa5f7573978902889d4f34fcb757ea7fd067 3
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=caec9247559968b88ff0ae0f17d9fa5f7573978902889d4f34fcb757ea7fd067
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:36.016 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.USA
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.USA
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.USA
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8a24fdb751f35f9ebb453d35d64da0431395db181f75f553
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Kpy
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8a24fdb751f35f9ebb453d35d64da0431395db181f75f553 0
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8a24fdb751f35f9ebb453d35d64da0431395db181f75f553 0
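Every gen_dhchap_key call above follows the same pattern: xxd draws N random bytes from /dev/urandom as a hex string, and an inline `python -` wraps that string in the DH-HMAC-CHAP secret representation. The heredoc body itself is not echoed by xtrace; the following is a sketch of what it plausibly does, consistent with the secrets visible later in this log (base64 of the hex string with a little-endian CRC-32 appended, per the NVMe DH-HMAC-CHAP secret format) — an illustrative stand-in, not the verbatim helper:

len=24 digest=0   # digest ids: 0=null, 1=sha256, 2=sha384, 3=sha512
key=$(xxd -p -c0 -l "$len" /dev/urandom)
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte checksum appended to the secret
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF

Decoding one of the secrets used later supports this reading: the base64 payload of DHHC-1:00:OGEyNGZk...oVuETQ==: is exactly the 48-hex-digit string drawn just above (key=8a24fdb7...) followed by four checksum bytes.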
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8a24fdb751f35f9ebb453d35d64da0431395db181f75f553
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Kpy
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Kpy
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Kpy
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f40b4cbb578f01c1e493a24cb5f9a9d17a7bad84b60a0aea
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.sPB
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f40b4cbb578f01c1e493a24cb5f9a9d17a7bad84b60a0aea 2
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f40b4cbb578f01c1e493a24cb5f9a9d17a7bad84b60a0aea 2
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f40b4cbb578f01c1e493a24cb5f9a9d17a7bad84b60a0aea
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.sPB
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.sPB
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.sPB
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9d46d85d705f79fb92f88005964ecfc7
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Wt4
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9d46d85d705f79fb92f88005964ecfc7 1
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9d46d85d705f79fb92f88005964ecfc7 1
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9d46d85d705f79fb92f88005964ecfc7
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Wt4
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Wt4
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Wt4
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a19aed6fcb2139d571284023f4e0f8a7
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.sRg
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a19aed6fcb2139d571284023f4e0f8a7 1
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a19aed6fcb2139d571284023f4e0f8a7 1
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a19aed6fcb2139d571284023f4e0f8a7
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:30:36.278 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.sRg
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.sRg
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.sRg
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=63750aa774600e8d713149ffafa4bde3d29674b937e826ab
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZNy
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 63750aa774600e8d713149ffafa4bde3d29674b937e826ab 2
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 63750aa774600e8d713149ffafa4bde3d29674b937e826ab 2
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=63750aa774600e8d713149ffafa4bde3d29674b937e826ab
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZNy
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZNy
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ZNy
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f0143ae6018c6c7629889f089eb931be
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uIG
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f0143ae6018c6c7629889f089eb931be 0
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f0143ae6018c6c7629889f089eb931be 0
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f0143ae6018c6c7629889f089eb931be
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uIG
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uIG
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.uIG
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=276023d1b6c639a075e901a8a50d734ef89ca7866e8f6414e269428011e2905f
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ov6
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 276023d1b6c639a075e901a8a50d734ef89ca7866e8f6414e269428011e2905f 3
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 276023d1b6c639a075e901a8a50d734ef89ca7866e8f6414e269428011e2905f 3
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=276023d1b6c639a075e901a8a50d734ef89ca7866e8f6414e269428011e2905f
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ov6
00:30:36.540 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ov6
00:30:36.541 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Ov6
00:30:36.541 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
00:30:36.541 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1177933
00:30:36.541 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1177933 ']'
00:30:36.541 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:36.541 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:36.541 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:36.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:36.541 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:36.541 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kSK
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.USA ]]
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.USA
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Kpy
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.sPB ]]
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sPB
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Wt4
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.sRg ]]
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sRg
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:36.802 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ZNy
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.uIG ]]
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.uIG
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Ov6
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.803 10:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
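rpc_cmd in this trace effectively shells out to scripts/rpc.py against the target's default RPC socket. Written out as plain RPC calls, the key-loading pass above amounts to the following (paths and key names as in this run; ckey4 is skipped because no controller key was generated for keys[4]):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk.sock keyring_file_add_key key0 /tmp/spdk.key-null.kSK
$rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.USA
$rpc -s /var/tmp/spdk.sock keyring_file_add_key key1 /tmp/spdk.key-null.Kpy
$rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sPB
# ...and likewise key2/ckey2, key3/ckey3, key4

Each registered name (key0, ckey0, ...) is what the host later hands to bdev_nvme_attach_controller via --dhchap-key/--dhchap-ctrlr-key.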
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:30:37.064 10:58:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:30:40.366 Waiting for block devices as requested
00:30:40.366 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:30:40.366 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:30:40.626 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:30:40.626 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:30:40.626 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:30:40.885 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:30:40.885 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:30:40.885 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:30:41.144 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:30:41.144 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:30:41.144 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:30:41.403 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:30:41.403 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:30:41.403 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:30:41.662 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:30:41.662 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:30:41.662 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:30:42.601 No valid GPT data, bailing
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
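The scan above only hands a namespace to the kernel target if it is neither zoned nor already carrying a partition-table signature; block_in_use treats the empty PTTYPE probe ("No valid GPT data, bailing") as free. Condensed, with spdk-gpt.py's role approximated by the blkid probe the script also runs:

for block in /sys/block/nvme*; do
    dev=${block##*/}
    # Skip zoned namespaces; 'none' in queue/zoned means a regular device.
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    pt=$(blkid -s PTTYPE -o value "/dev/$dev")   # empty: no partition table found
    [[ -z $pt ]] && nvme=/dev/$dev && break      # first unused namespace wins
done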
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:30:42.601
00:30:42.601 Discovery Log Number of Records 2, Generation counter 2
00:30:42.601 =====Discovery Log Entry 0======
00:30:42.601 trtype: tcp
00:30:42.601 adrfam: ipv4
00:30:42.601 subtype: current discovery subsystem
00:30:42.601 treq: not specified, sq flow control disable supported
00:30:42.601 portid: 1
00:30:42.601 trsvcid: 4420
00:30:42.601 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:30:42.601 traddr: 10.0.0.1
00:30:42.601 eflags: none
00:30:42.601 sectype: none
00:30:42.601 =====Discovery Log Entry 1======
00:30:42.601 trtype: tcp
00:30:42.601 adrfam: ipv4
00:30:42.601 subtype: nvme subsystem
00:30:42.601 treq: not specified, sq flow control disable supported
00:30:42.601 portid: 1
00:30:42.601 trsvcid: 4420
00:30:42.601 subnqn: nqn.2024-02.io.spdk:cnode0
00:30:42.601 traddr: 10.0.0.1
00:30:42.601 eflags: none
00:30:42.601 sectype: none
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==:
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==:
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
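The mkdir/echo/ln -s sequence above is the standard kernel nvmet configfs setup. xtrace does not show the redirection targets, so the mapping below onto stock nvmet attribute files is an inference, not a verbatim replay; auth.sh then registers host0 and (per the echo 0 above) most plausibly flips allow_any_host back to 0 so only the allow-listed host may connect:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # or attr_serial; target not visible in the trace
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"

The discovery listing above (two records: the discovery subsystem itself plus cnode0) confirms the port and subsystem came up before authentication starts.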
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==:
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]]
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==:
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.601 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.862 nvme0n1
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:42.862 10:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs:
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=:
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs:
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]]
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=:
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
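One iteration of the authentication matrix above, as the host-side RPCs it expands to: constrain negotiation with bdev_nvme_set_options, attach with a DH-HMAC-CHAP key (the controller key makes the authentication bidirectional), confirm the controller came up, then detach. All flags are as traced; rpc.py talks to /var/tmp/spdk.sock by default:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0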
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.862 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.863 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:43.123 nvme0n1
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==:
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==:
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==:
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]]
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==:
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:43.123 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:43.124 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- #
echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.385 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.646 nvme0n1 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:30:43.646 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.647 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.908 nvme0n1 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:43.908 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.909 10:58:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.909 nvme0n1 00:30:43.909 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.909 10:58:23 
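[Editor's note] Each pass of the loop traced above reduces to the same four-step RPC cycle. A minimal sketch follows; rpc_cmd in the autotest harness wraps SPDK's scripts/rpc.py, and key0/ckey0 name DH-HMAC-CHAP secrets the script registered earlier in the run (not shown in this excerpt) — all flags below are copied from the trace itself:

  # configure which digest/DH-group pair the initiator will negotiate
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # connect to the target; authentication happens during attach
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py bdev_nvme_get_controllers     # expect "nvme0" once authentication succeeds
  scripts/rpc.py bdev_nvme_detach_controller nvme0
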
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.909 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:43.909 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.169 nvme0n1 00:30:44.169 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:30:44.430 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:44.431 
10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.431 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.691 nvme0n1 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.691 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:44.692 10:58:23 
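[Editor's note] The unadorned echo lines from nvmet_auth_set_key (host/auth.sh@48-51) look like no-ops only because bash's xtrace omits redirections; in the script they provision the kernel nvmet target for the same digest/dhgroup/key combination the initiator is about to negotiate. A sketch, assuming the upstream nvmet configfs attribute names (they are not visible in this trace, and the secrets are elided as placeholders):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'              > "$host/dhchap_hash"      # digest
  echo ffdhe2048                   > "$host/dhchap_dhgroup"   # DH group
  echo 'DHHC-1:01:<host secret>:'  > "$host/dhchap_key"       # host key
  echo 'DHHC-1:01:<ctrlr secret>:' > "$host/dhchap_ctrl_key"  # only for bidirectional auth
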
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.692 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.952 nvme0n1 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:44.953 10:58:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.953 10:58:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.220 nvme0n1 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:45.220 10:58:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.220 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.502 nvme0n1 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:45.502 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:45.503 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:45.503 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.503 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.780 nvme0n1 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:45.780 10:58:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.780 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.781 10:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.100 nvme0n1 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
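[Editor's note] The get_main_ns_ip block that precedes every attach (and recurs below) is a transport-to-variable lookup followed by an indirect expansion. A sketch reconstructed from the trace; the transport variable's exact name is an assumption of this note:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]="NVMF_FIRST_TARGET_IP"
          ["tcp"]="NVMF_INITIATOR_IP"
      )
      ip=${ip_candidates[$TEST_TRANSPORT]}   # here: tcp -> NVMF_INITIATOR_IP
      [[ -n $ip && -n ${!ip} ]] || return 1  # both the name and its value must be set
      echo "${!ip}"                          # here: 10.0.0.1
  }
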
00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.100 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.368 nvme0n1 00:30:46.368 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.369 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.629 nvme0n1 00:30:46.629 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.629 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.629 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.629 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.629 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.629 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.629 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.629 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.629 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.629 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.890 10:58:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:46.890 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:46.891 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.891 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.891 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:46.891 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.891 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:46.891 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:46.891 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:46.891 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:46.891 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.891 10:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.152 nvme0n1 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.152 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.413 nvme0n1 00:30:47.413 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.413 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.413 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:47.413 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.413 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 
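The secrets themselves use the NVMe-oF DH-HMAC-CHAP representation DHHC-1:xx:<base64>:, where the base64 payload carries the secret plus a CRC and the middle field encodes the hash with which the secret may have been transformed (00 = none, 01/02/03 = SHA-256/384/512 — a reading of the secret-representation format, not stated in this log). Note also that keyid 4 has no companion controller key (ckey=''), so its attach runs with --dhchap-key key4 alone, i.e. unidirectional authentication only. The script drops the flag via bash's ${var:+word} expansion, visible in the ckey=(...) trace lines; a tiny illustration:

    ckeys=([0]="<secret>" [4]="")                     # keyid 4: empty controller key
    ckey=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})    # expands to nothing when empty
    echo "${#ckey[@]}"                                # 0 -> no bidirectional-auth args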
00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.674 10:58:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.935 nvme0n1 00:30:47.935 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.935 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.935 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:47.935 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.935 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.935 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.196 10:58:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.196 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.458 nvme0n1 00:30:48.458 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.458 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.458 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:48.458 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.458 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.458 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.458 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.458 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.458 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.458 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:48.718 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.719 10:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.979 nvme0n1 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:30:48.979 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:48.980 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:48.980 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:48.980 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:48.980 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:48.980 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:48.980 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.980 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.240 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.501 nvme0n1 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.501 10:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:50.443 nvme0n1 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.443 10:58:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.014 nvme0n1 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:51.014 
10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.014 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.015 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.584 nvme0n1 00:30:51.584 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.584 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.584 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.584 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.584 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.845 
10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.845 10:58:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.416 nvme0n1 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.416 10:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.358 nvme0n1 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.358 nvme0n1 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.358 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.619 nvme0n1 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:53.619 10:58:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.619 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.620 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.880 nvme0n1 00:30:53.880 10:58:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:30:53.880 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.881 10:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.141 nvme0n1 00:30:54.141 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.142 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.402 nvme0n1 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.402 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.403 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.663 nvme0n1 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.663 
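get_main_ns_ip is traced in full before every attach (nvmf/common.sh@769-783): it maps each transport to the name of the environment variable holding the connect address, expands that name indirectly, and echoes the result (10.0.0.1 on this run). A reconstruction from the xtrace follows; the $TEST_TRANSPORT variable name and the early-return structure are assumptions, since the trace only shows expanded values such as tcp:

    # sketch reconstructed from the nvmf/common.sh@769-783 trace above
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA tests use the target-side IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP tests use the initiator-side IP
        [[ -z $TEST_TRANSPORT ]] && return 1                  # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}            # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -z $ip ]] && return 1                              # traced as [[ -z 10.0.0.1 ]]
        echo "$ip"
    }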
10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:54.663 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.664 10:58:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.664 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.924 nvme0n1 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.924 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.925 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:54.925 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.925 10:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.186 nvme0n1 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.186 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.447 nvme0n1 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:55.447 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:55.448 
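keyid=4, being set up in the surrounding trace, is the one entry in this sweep without a paired controller key: ckey= expands empty above, so the [[ -z '' ]] branch is taken and the attach requests unidirectional authentication only (the host proves itself; the controller is not challenged back). The host/auth.sh@58 line shows how that is wired without an if: ${var:+word} expands to nothing when the slot is unset, so the option simply disappears from the command line. A sketch of the idiom, using the script's rpc_cmd wrapper:

    # empty array when no controller key exists for this keyid, so no
    # --dhchap-ctrlr-key appears on the command line at all (keyid=4 here)
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

On the secrets themselves: in the DHHC-1:NN:<base64>: format the middle field records the hash used to transform the secret (00 = untransformed, 01/02/03 = SHA-256/384/512), and the base64 payload carries the secret with a CRC-32 appended, which is why the key strings in this log vary in length across keyids.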
10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.448 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.710 nvme0n1 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.710 
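
The ffdhe3072 pass above and the ffdhe4096 pass that follows are driven by the nested loops visible in the host/auth.sh@101-104 frames. A minimal sketch of that driver, assuming the contents of the dhgroups array (only ffdhe3072 through ffdhe8192 are visible in this stretch of the log) and the keys/ckeys arrays populated earlier in the script:

  # Sketch reconstructed from the host/auth.sh@101-104 frames; array contents are assumptions.
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do                       # keys[] holds the DHHC-1 secrets, ckeys[] the ctrlr keys
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # program the kernel target side
          connect_authenticate sha384 "$dhgroup" "$keyid"  # attach, verify, detach via SPDK RPC
      done
  done

Only sha384 appears as the digest here; the same pattern presumably repeats for the other digests elsewhere in the log.
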
10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.710 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.711 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:55.711 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.711 10:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.972 nvme0n1 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:55.972 10:58:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.972 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.234 nvme0n1 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.234 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.496 nvme0n1 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.496 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.758 10:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.019 nvme0n1 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:57.020 10:58:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.020 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.281 nvme0n1 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.281 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.852 nvme0n1 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.852 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.853 10:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.424 nvme0n1 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.424 10:58:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.424 10:58:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.424 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.685 nvme0n1 00:30:58.685 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.685 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.685 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.685 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.685 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.685 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:58.946 10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.946 
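
Each keyid iteration above follows the same host-side sequence, which the host/auth.sh@55-65 frames let us reconstruct: restrict the allowed digest/dhgroup, attach with the DH-HMAC-CHAP key (plus the controller key when one is defined), confirm the controller actually came up, then detach. A sketch under those assumptions — rpc_cmd, get_main_ns_ip, and both NQNs are taken verbatim from the trace, but this is a reconstruction, not the script's exact text:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3 ckey=()
      # Only pass --dhchap-ctrlr-key when a controller key exists for this keyid (host/auth.sh@58)
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      # The controller only shows up here if DH-HMAC-CHAP authentication succeeded
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The "nvme0n1" lines interleaved in the trace are the namespace of the freshly attached controller appearing between the attach and the get_controllers check.
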
10:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.207 nvme0n1 00:30:59.207 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.207 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.207 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.207 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.207 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.207 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.467 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.468 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.729 nvme0n1 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.729 10:58:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.729 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.989 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.989 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.989 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:59.989 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:59.989 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:59.989 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.989 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.989 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:59.990 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.990 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:59.990 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:59.990 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:59.990 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:59.990 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.990 10:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.560 nvme0n1 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.560 10:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.133 nvme0n1 00:31:01.133 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.133 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.133 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.133 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.133 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.133 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.133 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.133 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.133 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:01.133 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.395 
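nvmet_auth_set_key (host/auth.sh@42-@51 above) pushes the per-iteration digest, DH group, and DHHC-1 secrets to the Linux nvmet target. The echo destinations are not visible in the trace; assuming the standard kernel nvmet configfs host attributes, the writes land roughly as follows (a sketch only, the paths are not confirmed by this log):

    # Where the traced echoes most plausibly go: the nvmet host entry in configfs.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"       # @48: digest for this pass
    echo ffdhe8192      > "$host/dhchap_dhgroup"    # @49: DH group under test
    echo "$key"         > "$host/dhchap_key"        # @50: host secret
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # @51: controller secret, if any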
10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.395 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.966 nvme0n1 00:31:01.966 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.966 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.966 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.966 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.966 10:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:01.966 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.967 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.538 nvme0n1 00:31:02.538 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.538 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.538 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.538 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.538 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.800 10:58:41 
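Each iteration is verified the same way before teardown: list controllers over RPC, extract the name with jq, and compare it against nvme0. xtrace renders the quoted right-hand side of the comparison with every character backslash-escaped, which is why it prints as nvme0 == \n\v\m\e\0. In plain form:

    # The verify-and-teardown step as traced at host/auth.sh@64-@65.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                       # quoted RHS, so no glob matching
    rpc_cmd bdev_nvme_detach_controller nvme0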
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:02.800 10:58:41 
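The keyid-4 secret above carries hash indicator 03 and no controller key. A DHHC-1 string has the shape DHHC-1:<t>:<base64 payload>:, where t names the transform (00 = secret used as-is; 01/02/03 = SHA-256/384/512, implying 32/48/64-byte secrets), and, by the usual NVMe-oF in-band-authentication representation, the payload is the secret followed by a 4-byte CRC-32. Treat the decode below as a sketch under that assumption (GNU base64 and head):

    # Peel a DHHC-1 secret apart; the last 4 decoded bytes are the CRC-32.
    key='DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=:'
    b64=${key#DHHC-1:*:}                 # strip the 'DHHC-1:03:' prefix
    b64=${b64%:}                         # strip the trailing ':'
    echo -n "$b64" | base64 -d | head -c -4    # the 64-byte secret itself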
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.800 10:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.371 nvme0n1 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.371 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.372 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:03.632 nvme0n1 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:03.632 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.633 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.893 nvme0n1 00:31:03.893 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:03.894 
10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.894 10:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.894 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 nvme0n1 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.155 
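Outside the harness, the host side of one iteration is just the two SPDK RPCs traced above: restrict the allowed digest and DH group, then attach using the keyring names. The flags below are verbatim from the sha512/ffdhe2048/keyid-2 records; the scripts/rpc.py path and the prior registration of key2/ckey2 (e.g. via the keyring_file_add_key RPC) are assumptions about setup that happened before this excerpt:

    # Host-side RPC pair for the sha512/ffdhe2048, keyid=2 iteration.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2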
10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.155 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.417 nvme0n1 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:04.417 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.418 nvme0n1 00:31:04.418 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.680 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.941 nvme0n1 00:31:04.941 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.941 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.941 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.941 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.941 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.941 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.942 
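Every connect_authenticate builds its controller-key argument with the :+ expansion seen at host/auth.sh@58, which is why the keyid-4 iterations earlier attach with --dhchap-key only (unidirectional authentication): an empty ckeys entry expands to zero words, so the flag simply never appears. A standalone demo of that expansion, with dummy values:

    # Demo of the host/auth.sh@58 pattern (values are placeholders).
    keyid=4
    ckeys=([0]="dummy-ckey0" [4]="")     # indexed array; entry 4 is empty
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"                   # prints 0: no --dhchap-ctrlr-key passed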
10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:04.942 10:58:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.942 10:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.204 nvme0n1 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:05.204 10:58:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.204 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.466 nvme0n1 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.466 10:58:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.466 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.728 nvme0n1 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:05.728 
10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:05.728 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:05.729 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.729 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.729 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:05.729 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.729 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:05.729 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:05.729 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:05.729 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:05.729 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.729 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
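[Note] The records above and below repeat one pattern: host/auth.sh iterates over DH-CHAP digests, DH groups, and key IDs, and for each combination sets the target key (nvmet_auth_set_key), restricts the initiator's accepted parameters (bdev_nvme_set_options), then attaches, verifies, and detaches the controller (connect_authenticate). A minimal sketch of a single iteration follows, using only the RPCs and flags visible verbatim in this log; the loop variables and the assumption that rpc_cmd wraps SPDK's rpc.py are reconstructions from the xtrace output, not the literal test source.

    # Sketch of one connect_authenticate iteration, inferred from the xtrace
    # records in this log. rpc_cmd is assumed to be the test suite's wrapper
    # around SPDK's rpc.py; keyN/ckeyN name DHHC-1 secrets like those echoed
    # above.
    digest=sha512
    dhgroup=ffdhe4096
    keyid=0

    # Limit the initiator to the digest/dhgroup under test (host/auth.sh@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key and, when a controller key exists, the
    # bidirectional --dhchap-ctrlr-key as well (host/auth.sh@61).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Confirm authentication succeeded by checking the controller surfaced
    # (host/auth.sh@64), then detach before the next combination
    # (host/auth.sh@65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The keyid=4 case above omits --dhchap-ctrlr-key because ckeys[4] is empty (the `[[ -z '' ]]` check at host/auth.sh@51), so that attach authenticates in one direction only.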
00:31:05.990 nvme0n1 00:31:05.990 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.990 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.990 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.990 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.990 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.990 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.990 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.990 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.990 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.990 10:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:05.990 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:05.991 10:58:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.991 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.251 nvme0n1 00:31:06.251 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.251 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.251 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.251 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.251 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.251 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.252 10:58:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.252 10:58:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.252 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.512 nvme0n1 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.512 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.513 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.773 nvme0n1 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.773 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.033 10:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.033 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.033 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.034 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.294 nvme0n1 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.294 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.554 nvme0n1 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.554 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.555 10:58:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.555 10:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.127 nvme0n1 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:08.127 10:58:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.127 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.698 nvme0n1 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.698 10:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.959 nvme0n1 00:31:08.959 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.959 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.959 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.959 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.959 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:09.219 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:09.220 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:09.220 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:09.220 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.220 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.479 nvme0n1 00:31:09.479 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.479 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.479 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.479 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.479 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.479 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:09.739 10:58:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.739 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.740 10:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.000 nvme0n1 00:31:10.000 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.000 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.000 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.000 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.000 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.000 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.000 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.000 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.000 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.000 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
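keyid 4 above carries no controller key (ckey is empty), so the attach runs with --dhchap-key key4 alone; the --dhchap-ctrlr-key argument has to vanish entirely rather than expand to an empty string. The array assignment at host/auth.sh@58 does exactly that with bash's :+ alternate-value expansion. A standalone illustration (the ckeys contents here are placeholders, not the real secrets):

  declare -a ckeys=([1]="DHHC-1:02:placeholder==" [4]="")   # keyid 4: no ctrlr key
  for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # keyid=1 extra args: --dhchap-ctrlr-key ckey1
  # keyid=4 extra args: <none>   (:+ yields nothing when the value is unset or empty)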
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkZjlkZGI0YmI5Y2FmODI1ODczYjAxOTNmZjQzY2I530xs: 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: ]] 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2FlYzkyNDc1NTk5NjhiODhmZjBhZTBmMTdkOWZhNWY3NTczOTc4OTAyODg5ZDRmMzRmY2I3NTdlYTdmZDA2NwrmdL4=: 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.259 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.260 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.830 nvme0n1 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
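The ffdhe8192 blocks that follow are simply the next turn of the two nested loops visible in the trace (host/auth.sh@101 over dhgroups, @102 over keyids), with the digest still sha512. The driving shape, with the arrays abbreviated to what this excerpt actually exercises (earlier dhgroups ran before this point in the log, and keys[] is defined earlier in auth.sh):

  dhgroups=(ffdhe6144 ffdhe8192)          # excerpt-local view; the full list is longer
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do        # keyids 0..4
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
      connect_authenticate sha512 "$dhgroup" "$keyid"
    done
  done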
DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.830 10:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.399 nvme0n1 00:31:11.400 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.400 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.400 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.400 10:58:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.400 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.660 10:58:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.660 10:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.230 nvme0n1 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjM3NTBhYTc3NDYwMGU4ZDcxMzE0OWZmYWZhNGJkZTNkMjk2NzRiOTM3ZTgyNmFiWl/ZQA==: 00:31:12.230 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: ]] 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAxNDNhZTYwMThjNmM3NjI5ODg5ZjA4OWViOTMxYmWK9CPn: 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:12.231 10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.231 
10:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.172 nvme0n1 00:31:13.172 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.172 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.172 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.172 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.172 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.172 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.172 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc2MDIzZDFiNmM2MzlhMDc1ZTkwMWE4YTUwZDczNGVmODljYTc4NjZlOGY2NDE0ZTI2OTQyODAxMWUyOTA1ZnZZDnI=: 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.173 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.743 nvme0n1 00:31:13.743 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.743 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.743 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.743 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.743 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.743 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.744 request: 00:31:13.744 { 00:31:13.744 "name": "nvme0", 00:31:13.744 "trtype": "tcp", 00:31:13.744 "traddr": "10.0.0.1", 00:31:13.744 "adrfam": "ipv4", 00:31:13.744 "trsvcid": "4420", 00:31:13.744 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:13.744 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:13.744 "prchk_reftag": false, 00:31:13.744 "prchk_guard": false, 00:31:13.744 "hdgst": false, 00:31:13.744 "ddgst": false, 00:31:13.744 "allow_unrecognized_csi": false, 00:31:13.744 "method": "bdev_nvme_attach_controller", 00:31:13.744 "req_id": 1 00:31:13.744 } 00:31:13.744 Got JSON-RPC error response 00:31:13.744 response: 00:31:13.744 { 00:31:13.744 "code": -5, 00:31:13.744 "message": "Input/output error" 00:31:13.744 } 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.744 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.004 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:14.004 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:14.004 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:14.004 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:14.004 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:14.004 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
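From host/auth.sh@112 onward the polarity flips: the target still demands DH-HMAC-CHAP (the sha256/ffdhe2048 keyid-1 pair was just installed at @110), so attaches with missing or wrong credentials must fail, and the NOT helper from autotest_common.sh asserts that by inverting the wrapped command's exit status (the es bookkeeping at @652-@679 also screens out signal-level codes above 128). Both failing attempts surface as JSON-RPC error -5, i.e. -EIO ("Input/output error"), and the @114/@120 jq-length checks confirm no stale controller was left behind. The two assertions, as issued in this trace:

  # must fail: no --dhchap-key at all while the target enforces DH-HMAC-CHAP
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
  # must fail: a host key in the wrong slot (key2) for the pair the target holds
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2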
00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.005 10:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.005 request: 00:31:14.005 { 00:31:14.005 "name": "nvme0", 00:31:14.005 "trtype": "tcp", 00:31:14.005 "traddr": "10.0.0.1", 00:31:14.005 "adrfam": "ipv4", 00:31:14.005 "trsvcid": "4420", 00:31:14.005 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:14.005 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:14.005 "prchk_reftag": false, 00:31:14.005 "prchk_guard": false, 00:31:14.005 "hdgst": false, 00:31:14.005 "ddgst": false, 00:31:14.005 "dhchap_key": "key2", 00:31:14.005 "allow_unrecognized_csi": false, 00:31:14.005 "method": "bdev_nvme_attach_controller", 00:31:14.005 "req_id": 1 00:31:14.005 } 00:31:14.005 Got JSON-RPC error response 00:31:14.005 response: 00:31:14.005 { 00:31:14.005 "code": -5, 00:31:14.005 "message": "Input/output error" 00:31:14.005 } 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.005 request: 00:31:14.005 { 00:31:14.005 "name": "nvme0", 00:31:14.005 "trtype": "tcp", 00:31:14.005 "traddr": "10.0.0.1", 00:31:14.005 "adrfam": "ipv4", 00:31:14.005 "trsvcid": "4420", 00:31:14.005 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:14.005 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:14.005 "prchk_reftag": false, 00:31:14.005 "prchk_guard": false, 00:31:14.005 "hdgst": false, 00:31:14.005 "ddgst": false, 00:31:14.005 "dhchap_key": "key1", 00:31:14.005 "dhchap_ctrlr_key": "ckey2", 00:31:14.005 "allow_unrecognized_csi": false, 00:31:14.005 "method": "bdev_nvme_attach_controller", 00:31:14.005 "req_id": 1 00:31:14.005 } 00:31:14.005 Got JSON-RPC error response 00:31:14.005 response: 00:31:14.005 { 00:31:14.005 "code": -5, 00:31:14.005 "message": "Input/output 
error" 00:31:14.005 } 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.005 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.266 nvme0n1 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.266 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.526 request: 00:31:14.526 { 00:31:14.526 "name": "nvme0", 00:31:14.526 "dhchap_key": "key1", 00:31:14.526 "dhchap_ctrlr_key": "ckey2", 00:31:14.526 "method": "bdev_nvme_set_keys", 00:31:14.526 "req_id": 1 00:31:14.526 } 00:31:14.526 Got JSON-RPC error response 00:31:14.526 response: 00:31:14.526 { 00:31:14.526 "code": -13, 00:31:14.526 "message": "Permission denied" 00:31:14.526 } 00:31:14.526 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:14.526 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:14.526 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:14.526 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:14.526 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:31:14.526 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.526 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:14.526 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.526 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.526 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.527 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:14.527 10:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:15.467 10:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.467 10:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:15.467 10:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.467 10:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.467 10:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.467 10:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:15.467 10:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:16.407 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.408 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:16.408 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.408 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGEyNGZkYjc1MWYzNWY5ZWJiNDUzZDM1ZDY0ZGEwNDMxMzk1ZGIxODFmNzVmNTUzoVuETQ==: 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: ]] 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjQwYjRjYmI1NzhmMDFjMWU0OTNhMjRjYjVmOWE5ZDE3YTdiYWQ4NGI2MGEwYWVhq3L3DA==: 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.669 nvme0n1 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0NmQ4NWQ3MDVmNzlmYjkyZjg4MDA1OTY0ZWNmYzelXKFy: 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: ]] 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTE5YWVkNmZjYjIxMzlkNTcxMjg0MDIzZjRlMGY4YTcXdVqf: 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.669 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.930 request: 00:31:16.930 { 00:31:16.930 "name": "nvme0", 00:31:16.930 "dhchap_key": "key2", 00:31:16.930 "dhchap_ctrlr_key": "ckey1", 00:31:16.930 "method": "bdev_nvme_set_keys", 00:31:16.930 "req_id": 1 00:31:16.930 } 00:31:16.930 Got JSON-RPC error response 00:31:16.930 response: 00:31:16.930 { 00:31:16.930 "code": -13, 00:31:16.930 "message": "Permission denied" 00:31:16.930 } 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:31:16.930 10:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:31:17.871 10:58:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.871 10:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.871 rmmod nvme_tcp 00:31:17.871 rmmod nvme_fabrics 00:31:17.871 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.871 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:31:17.871 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:31:17.871 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1177933 ']' 00:31:17.871 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1177933 00:31:17.871 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1177933 ']' 00:31:17.871 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1177933 00:31:17.871 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:31:17.871 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:17.871 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1177933 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1177933' 00:31:18.132 killing process with pid 1177933 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1177933 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1177933 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:31:18.132 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:31:20.676 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:23.977 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:23.977 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:24.238 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kSK /tmp/spdk.key-null.Kpy /tmp/spdk.key-sha256.Wt4 /tmp/spdk.key-sha384.ZNy /tmp/spdk.key-sha512.Ov6 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:31:24.238 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:28.453 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
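[editor's note] The clean_kernel_target sequence traced above tears down the configfs-based kernel nvmet target in strict reverse order of its creation: the host ACL link and host entry first, then the port-to-subsystem link, then namespace, port, and subsystem directories, and the modules only once configfs is empty. Below is a hedged sketch of that order; the paths are the standard nvmet configfs layout with this test's NQNs, and the target of the bare "echo 0" in the trace is assumed to be the namespace enable attribute.

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
rm -f "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"    # drop the host ACL link
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$subsys/namespaces/1/enable"                     # assumed target of the trace's echo 0
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                                # unload only once configfs is empty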
00:31:28.453 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:28.453 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:28.453 00:31:28.453 real 1m0.780s 00:31:28.453 user 0m54.521s 00:31:28.453 sys 0m16.169s 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.453 ************************************ 00:31:28.453 END TEST nvmf_auth_host 00:31:28.453 ************************************ 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.453 ************************************ 00:31:28.453 START TEST nvmf_digest 00:31:28.453 ************************************ 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:28.453 * Looking for test storage... 
00:31:28.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:31:28.453 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:28.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.454 --rc genhtml_branch_coverage=1 00:31:28.454 --rc genhtml_function_coverage=1 00:31:28.454 --rc genhtml_legend=1 00:31:28.454 --rc geninfo_all_blocks=1 00:31:28.454 --rc geninfo_unexecuted_blocks=1 00:31:28.454 00:31:28.454 ' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:28.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.454 --rc genhtml_branch_coverage=1 00:31:28.454 --rc genhtml_function_coverage=1 00:31:28.454 --rc genhtml_legend=1 00:31:28.454 --rc geninfo_all_blocks=1 00:31:28.454 --rc geninfo_unexecuted_blocks=1 00:31:28.454 00:31:28.454 ' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:28.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.454 --rc genhtml_branch_coverage=1 00:31:28.454 --rc genhtml_function_coverage=1 00:31:28.454 --rc genhtml_legend=1 00:31:28.454 --rc geninfo_all_blocks=1 00:31:28.454 --rc geninfo_unexecuted_blocks=1 00:31:28.454 00:31:28.454 ' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:28.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.454 --rc genhtml_branch_coverage=1 00:31:28.454 --rc genhtml_function_coverage=1 00:31:28.454 --rc genhtml_legend=1 00:31:28.454 --rc geninfo_all_blocks=1 00:31:28.454 --rc geninfo_unexecuted_blocks=1 00:31:28.454 00:31:28.454 ' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.454 
10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:28.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:28.454 10:59:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:31:28.454 10:59:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.598 
10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:36.598 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:36.598 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.598 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:36.599 Found net devices under 0000:4b:00.0: cvl_0_0 
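[editor's note] The device scan above walks a whitelist of supported NIC PCI IDs (e810, x722, and mlx families) and resolves each matching PCI address to its kernel net device through sysfs. A minimal sketch of that resolution step, reusing the array names visible in the trace:

# For each supported PCI function, sysfs exposes the bound net device
# as a child of .../net; the basename is the interface name (cvl_0_0 here).
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name
    net_devs+=("${pci_net_devs[@]}")
done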
00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:36.599 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.599 10:59:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:31:36.599 00:31:36.599 --- 10.0.0.2 ping statistics --- 00:31:36.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.599 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:36.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:31:36.599 00:31:36.599 --- 10.0.0.1 ping statistics --- 00:31:36.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.599 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:36.599 ************************************ 00:31:36.599 START TEST nvmf_digest_clean 00:31:36.599 ************************************ 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1194913 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1194913 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194913 ']' 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.599 10:59:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:36.599 [2024-11-19 10:59:15.190804] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:31:36.599 [2024-11-19 10:59:15.190864] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.599 [2024-11-19 10:59:15.292597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.599 [2024-11-19 10:59:15.342709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.599 [2024-11-19 10:59:15.342761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.599 [2024-11-19 10:59:15.342769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.599 [2024-11-19 10:59:15.342777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.599 [2024-11-19 10:59:15.342783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:36.599 [2024-11-19 10:59:15.343579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.861 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.861 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:36.861 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.861 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.861 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:36.861 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.861 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:37.123 null0 00:31:37.123 [2024-11-19 10:59:16.154253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.123 [2024-11-19 10:59:16.178548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1195063 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1195063 /var/tmp/bperf.sock 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1195063 ']' 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:37.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:37.123 10:59:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:37.123 [2024-11-19 10:59:16.239477] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:31:37.123 [2024-11-19 10:59:16.239546] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195063 ] 00:31:37.384 [2024-11-19 10:59:16.331137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.384 [2024-11-19 10:59:16.383983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.956 10:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:37.956 10:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:37.956 10:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:37.956 10:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:37.956 10:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:38.217 10:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:38.217 10:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:38.788 nvme0n1 00:31:38.788 10:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:38.788 10:59:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:38.788 Running I/O for 2 seconds... 
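Each run_bperf pass drives the paused bdevperf app over its private socket in exactly the order traced above: finish the deferred framework init, attach the TCP controller with the data digest enabled, then start the timed run. Condensed from the trace, with the socket path and NQN this run uses:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Finish the subsystem init that --wait-for-rpc deferred.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest (a CRC32C over each PDU's payload).
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Kick off the workload in the already-running bdevperf process.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests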
00:31:40.677 19145.00 IOPS, 74.79 MiB/s [2024-11-19T09:59:19.872Z] 19387.00 IOPS, 75.73 MiB/s 00:31:40.677 Latency(us) 00:31:40.677 [2024-11-19T09:59:19.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.677 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:40.677 nvme0n1 : 2.00 19417.89 75.85 0.00 0.00 6585.83 3126.61 20534.61 00:31:40.677 [2024-11-19T09:59:19.872Z] =================================================================================================================== 00:31:40.677 [2024-11-19T09:59:19.872Z] Total : 19417.89 75.85 0.00 0.00 6585.83 3126.61 20534.61 00:31:40.677 { 00:31:40.677 "results": [ 00:31:40.677 { 00:31:40.677 "job": "nvme0n1", 00:31:40.677 "core_mask": "0x2", 00:31:40.677 "workload": "randread", 00:31:40.677 "status": "finished", 00:31:40.677 "queue_depth": 128, 00:31:40.677 "io_size": 4096, 00:31:40.677 "runtime": 2.00341, 00:31:40.677 "iops": 19417.892493298925, 00:31:40.677 "mibps": 75.85114255194893, 00:31:40.677 "io_failed": 0, 00:31:40.677 "io_timeout": 0, 00:31:40.677 "avg_latency_us": 6585.833195893955, 00:31:40.677 "min_latency_us": 3126.6133333333332, 00:31:40.677 "max_latency_us": 20534.613333333335 00:31:40.677 } 00:31:40.677 ], 00:31:40.677 "core_count": 1 00:31:40.677 } 00:31:40.677 10:59:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:40.677 10:59:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:40.677 10:59:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:40.677 10:59:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:40.677 | select(.opcode=="crc32c") 00:31:40.677 | "\(.module_name) \(.executed)"' 00:31:40.677 10:59:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1195063 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1195063 ']' 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1195063 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195063 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195063' 00:31:40.938 killing process with pid 1195063 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1195063 00:31:40.938 Received shutdown signal, test time was about 2.000000 seconds 00:31:40.938 00:31:40.938 Latency(us) 00:31:40.938 [2024-11-19T09:59:20.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.938 [2024-11-19T09:59:20.133Z] =================================================================================================================== 00:31:40.938 [2024-11-19T09:59:20.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:40.938 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1195063 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1195922 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1195922 /var/tmp/bperf.sock 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1195922 ']' 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:41.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:41.198 10:59:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:41.198 [2024-11-19 10:59:20.234559] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
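This launch is the second of four clean-digest passes; run_digest varies only the workload, I/O size, and queue depth between them. The full sequence as it appears at host/digest.sh@128-131 in this trace (the trailing false is the scan_dsa flag, so every pass stays on the software crc32c path):

    run_bperf randread  4096   128 false    # 4 KiB reads,   qd 128
    run_bperf randread  131072 16  false    # 128 KiB reads, qd 16
    run_bperf randwrite 4096   128 false    # 4 KiB writes,  qd 128
    run_bperf randwrite 131072 16  false    # 128 KiB writes, qd 16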
00:31:41.198 [2024-11-19 10:59:20.234616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195922 ] 00:31:41.198 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:41.198 Zero copy mechanism will not be used. 00:31:41.198 [2024-11-19 10:59:20.317266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.198 [2024-11-19 10:59:20.346373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.140 10:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:42.140 10:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:42.140 10:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:42.140 10:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:42.140 10:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:42.140 10:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:42.140 10:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:42.713 nvme0n1 00:31:42.713 10:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:42.713 10:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:42.713 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:42.713 Zero copy mechanism will not be used. 00:31:42.713 Running I/O for 2 seconds... 
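Once the 2-second run below completes, the test pulls per-opcode accel statistics from the bperf app and asserts both that crc32c work was executed and that the software module did it (DSA is off in these passes). The check, reflowed from the get_accel_stats calls in this trace:

    # Keep only the crc32c row of the stats, printed as "module_name executed".
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
        | { read -r acc_module acc_executed
            (( acc_executed > 0 )) && [[ $acc_module == software ]] \
                && echo "crc32c ran in software: $acc_executed ops"; }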
00:31:44.599 3266.00 IOPS, 408.25 MiB/s [2024-11-19T09:59:23.794Z] 3617.50 IOPS, 452.19 MiB/s 00:31:44.599 Latency(us) 00:31:44.599 [2024-11-19T09:59:23.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.599 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:44.599 nvme0n1 : 2.01 3615.44 451.93 0.00 0.00 4422.68 648.53 12779.52 00:31:44.599 [2024-11-19T09:59:23.794Z] =================================================================================================================== 00:31:44.599 [2024-11-19T09:59:23.794Z] Total : 3615.44 451.93 0.00 0.00 4422.68 648.53 12779.52 00:31:44.599 { 00:31:44.599 "results": [ 00:31:44.599 { 00:31:44.599 "job": "nvme0n1", 00:31:44.599 "core_mask": "0x2", 00:31:44.599 "workload": "randread", 00:31:44.599 "status": "finished", 00:31:44.599 "queue_depth": 16, 00:31:44.599 "io_size": 131072, 00:31:44.599 "runtime": 2.005563, 00:31:44.599 "iops": 3615.443643505589, 00:31:44.599 "mibps": 451.93045543819863, 00:31:44.599 "io_failed": 0, 00:31:44.599 "io_timeout": 0, 00:31:44.599 "avg_latency_us": 4422.68344412265, 00:31:44.599 "min_latency_us": 648.5333333333333, 00:31:44.599 "max_latency_us": 12779.52 00:31:44.599 } 00:31:44.599 ], 00:31:44.599 "core_count": 1 00:31:44.599 } 00:31:44.599 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:44.599 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:44.860 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:44.860 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:44.860 | select(.opcode=="crc32c") 00:31:44.860 | "\(.module_name) \(.executed)"' 00:31:44.860 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:44.860 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:44.860 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:44.861 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:44.861 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:44.861 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1195922 00:31:44.861 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1195922 ']' 00:31:44.861 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1195922 00:31:44.861 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:44.861 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:44.861 10:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195922 00:31:44.861 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:44.861 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:31:44.861 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195922' 00:31:44.861 killing process with pid 1195922 00:31:44.861 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1195922 00:31:44.861 Received shutdown signal, test time was about 2.000000 seconds 00:31:44.861 00:31:44.861 Latency(us) 00:31:44.861 [2024-11-19T09:59:24.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.861 [2024-11-19T09:59:24.056Z] =================================================================================================================== 00:31:44.861 [2024-11-19T09:59:24.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1195922 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1196628 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1196628 /var/tmp/bperf.sock 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1196628 ']' 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:45.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:45.121 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:45.121 [2024-11-19 10:59:24.203533] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
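The results JSON blocks are internally consistent: mibps = iops * io_size / 2^20, and iops is the I/O count divided by the reported runtime. A quick check on the 128 KiB randread block above, with values copied from that JSON:

    awk 'BEGIN {
        iops = 3615.443643505589; io_size = 131072
        # 3615.443643... * 131072 / 1048576 = 451.93045543820, matching "mibps"
        printf "%.11f MiB/s\n", iops * io_size / (1024 * 1024)
    }'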
00:31:45.122 [2024-11-19 10:59:24.203591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196628 ] 00:31:45.122 [2024-11-19 10:59:24.285954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.122 [2024-11-19 10:59:24.315331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.075 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.075 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:46.075 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:46.075 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:46.075 10:59:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:46.075 10:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:46.075 10:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:46.336 nvme0n1 00:31:46.336 10:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:46.336 10:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:46.336 Running I/O for 2 seconds... 
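The shutdown sequence that closed the previous pass (pid 1195922 above) and closes each one below first checks that the pid is alive and what its process name is, then signals and reaps it. A rough equivalent of the autotest_common.sh helper as it behaves here, a simplification rather than the exact function:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                       # still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_1 for these bdevperf runs
        # The real helper special-cases process_name == sudo; omitted in this sketch.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap and propagate the exit status
    }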
00:31:48.659 30269.00 IOPS, 118.24 MiB/s [2024-11-19T09:59:27.854Z] 30399.50 IOPS, 118.75 MiB/s 00:31:48.659 Latency(us) 00:31:48.659 [2024-11-19T09:59:27.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.659 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.659 nvme0n1 : 2.01 30406.63 118.78 0.00 0.00 4204.08 2266.45 13817.17 00:31:48.659 [2024-11-19T09:59:27.854Z] =================================================================================================================== 00:31:48.659 [2024-11-19T09:59:27.854Z] Total : 30406.63 118.78 0.00 0.00 4204.08 2266.45 13817.17 00:31:48.659 { 00:31:48.659 "results": [ 00:31:48.659 { 00:31:48.659 "job": "nvme0n1", 00:31:48.659 "core_mask": "0x2", 00:31:48.660 "workload": "randwrite", 00:31:48.660 "status": "finished", 00:31:48.660 "queue_depth": 128, 00:31:48.660 "io_size": 4096, 00:31:48.660 "runtime": 2.005747, 00:31:48.660 "iops": 30406.62655858391, 00:31:48.660 "mibps": 118.7758849944684, 00:31:48.660 "io_failed": 0, 00:31:48.660 "io_timeout": 0, 00:31:48.660 "avg_latency_us": 4204.075755230537, 00:31:48.660 "min_latency_us": 2266.4533333333334, 00:31:48.660 "max_latency_us": 13817.173333333334 00:31:48.660 } 00:31:48.660 ], 00:31:48.660 "core_count": 1 00:31:48.660 } 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:48.660 | select(.opcode=="crc32c") 00:31:48.660 | "\(.module_name) \(.executed)"' 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1196628 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1196628 ']' 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1196628 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196628 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196628' 00:31:48.660 killing process with pid 1196628 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1196628 00:31:48.660 Received shutdown signal, test time was about 2.000000 seconds 00:31:48.660 00:31:48.660 Latency(us) 00:31:48.660 [2024-11-19T09:59:27.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.660 [2024-11-19T09:59:27.855Z] =================================================================================================================== 00:31:48.660 [2024-11-19T09:59:27.855Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:48.660 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1196628 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1197313 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1197313 /var/tmp/bperf.sock 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1197313 ']' 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:48.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.921 10:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:48.921 [2024-11-19 10:59:27.957662] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:31:48.921 [2024-11-19 10:59:27.957719] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197313 ] 00:31:48.921 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:48.921 Zero copy mechanism will not be used. 00:31:48.921 [2024-11-19 10:59:28.039094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.921 [2024-11-19 10:59:28.067984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.862 10:59:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.862 10:59:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:49.862 10:59:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:49.862 10:59:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:49.862 10:59:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:49.862 10:59:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:49.862 10:59:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:50.123 nvme0n1 00:31:50.123 10:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:50.123 10:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:50.123 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:50.123 Zero copy mechanism will not be used. 00:31:50.123 Running I/O for 2 seconds... 
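Every attach in this file passes only --ddgst, so it is specifically the NVMe/TCP data digest being exercised; rpc.py offers a separate --hdgst switch for the header digest, which these runs leave disabled. For comparison, a hypothetical attach that is not part of this trace:

    # Same attach, but validating the TCP header digest instead of the data digest.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --hdgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0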
00:31:52.449 4088.00 IOPS, 511.00 MiB/s [2024-11-19T09:59:31.644Z] 4561.50 IOPS, 570.19 MiB/s 00:31:52.449 Latency(us) 00:31:52.449 [2024-11-19T09:59:31.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.449 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:52.449 nvme0n1 : 2.01 4557.24 569.65 0.00 0.00 3504.38 1235.63 12834.13 00:31:52.449 [2024-11-19T09:59:31.644Z] =================================================================================================================== 00:31:52.449 [2024-11-19T09:59:31.644Z] Total : 4557.24 569.65 0.00 0.00 3504.38 1235.63 12834.13 00:31:52.449 { 00:31:52.449 "results": [ 00:31:52.449 { 00:31:52.449 "job": "nvme0n1", 00:31:52.449 "core_mask": "0x2", 00:31:52.449 "workload": "randwrite", 00:31:52.449 "status": "finished", 00:31:52.449 "queue_depth": 16, 00:31:52.449 "io_size": 131072, 00:31:52.449 "runtime": 2.005382, 00:31:52.449 "iops": 4557.236476641358, 00:31:52.449 "mibps": 569.6545595801698, 00:31:52.449 "io_failed": 0, 00:31:52.449 "io_timeout": 0, 00:31:52.449 "avg_latency_us": 3504.375101579312, 00:31:52.449 "min_latency_us": 1235.6266666666668, 00:31:52.449 "max_latency_us": 12834.133333333333 00:31:52.449 } 00:31:52.449 ], 00:31:52.449 "core_count": 1 00:31:52.449 } 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:52.449 | select(.opcode=="crc32c") 00:31:52.449 | "\(.module_name) \(.executed)"' 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1197313 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1197313 ']' 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1197313 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197313 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197313' 00:31:52.449 killing process with pid 1197313 00:31:52.449 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1197313 00:31:52.449 Received shutdown signal, test time was about 2.000000 seconds 00:31:52.449 00:31:52.449 Latency(us) 00:31:52.449 [2024-11-19T09:59:31.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.450 [2024-11-19T09:59:31.645Z] =================================================================================================================== 00:31:52.450 [2024-11-19T09:59:31.645Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:52.450 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1197313 00:31:52.710 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1194913 00:31:52.710 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194913 ']' 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194913 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194913 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194913' 00:31:52.711 killing process with pid 1194913 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194913 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194913 00:31:52.711 00:31:52.711 real 0m16.757s 00:31:52.711 user 0m33.184s 00:31:52.711 sys 0m3.665s 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.711 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:52.711 ************************************ 00:31:52.711 END TEST nvmf_digest_clean 00:31:52.711 ************************************ 00:31:52.971 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:52.971 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:52.971 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:52.972 ************************************ 00:31:52.972 START TEST nvmf_digest_error 00:31:52.972 ************************************ 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1198086 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1198086 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1198086 ']' 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:52.972 10:59:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:52.972 [2024-11-19 10:59:32.021957] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:31:52.972 [2024-11-19 10:59:32.022015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:52.972 [2024-11-19 10:59:32.114962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.972 [2024-11-19 10:59:32.148393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:52.972 [2024-11-19 10:59:32.148426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:52.972 [2024-11-19 10:59:32.148432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:52.972 [2024-11-19 10:59:32.148437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:52.972 [2024-11-19 10:59:32.148441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
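The --wait-for-rpc pause on this second target is what makes the upcoming crc32c remap possible: accel_assign_opc has to be issued before the accel framework initializes, so nvmf_digest_error reassigns the opcode first and only then completes the deferred init (framework_start_init is presumably part of the rpc_cmd batch at host/digest.sh@43, which also builds the null0/TCP target seen below). In RPC terms:

    # Order matters: remap the opcode while the app is still paused, then init.
    "$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error
    "$SPDK/scripts/rpc.py" framework_start_init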
00:31:52.972 [2024-11-19 10:59:32.148934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.938 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:53.938 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:53.938 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:53.938 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:53.938 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:53.938 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:53.938 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:53.939 [2024-11-19 10:59:32.846858] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:53.939 null0 00:31:53.939 [2024-11-19 10:59:32.924686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:53.939 [2024-11-19 10:59:32.948863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1198372 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1198372 /var/tmp/bperf.sock 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1198372 ']' 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:53.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:53.939 10:59:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:53.939 [2024-11-19 10:59:33.006465] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:31:53.939 [2024-11-19 10:59:33.006511] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198372 ] 00:31:53.939 [2024-11-19 10:59:33.087932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.254 [2024-11-19 10:59:33.117519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.938 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.938 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:54.938 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:54.938 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:54.938 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:54.938 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.938 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.938 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.938 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:54.938 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:55.224 nvme0n1 00:31:55.224 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:55.224 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.224 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
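From here the error pass alternates between the two RPC endpoints: the target on the default /var/tmp/spdk.sock gets the injection controls, while the paused bdevperf gets retry-friendly NVMe options and the digest-enabled attach. Condensed from host/digest.sh@61-69 around this point in the trace; the wrapper names below are illustrative (the suite's own helpers are rpc_cmd and bperf_rpc), and $SPDK is as in the sketches further up:

    rpc_tgt()   { "$SPDK/scripts/rpc.py" "$@"; }                         # nvmf_tgt side
    rpc_bperf() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf side
    rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_tgt accel_error_inject_error -o crc32c -t disable       # injection off while connecting
    rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt crc32c results (-i 256 as traced)
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests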
00:31:55.224 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.224 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:55.224 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:55.485 Running I/O for 2 seconds... 00:31:55.485 [2024-11-19 10:59:34.472506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.472538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.472547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.483300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.483319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.483326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.494632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.494650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.494657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.502295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.502313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.502319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.512810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.512829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.512836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.520668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.520685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.520691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.530232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.530249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.530255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.540368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.540385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.540391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.548727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.548744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.548750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.558540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.558557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.558564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.566628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.566644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.566650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.576357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.576374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.576383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.584517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.584534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.584540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.593055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.593071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.593077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.602537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.602553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.602560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.611233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.611251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.611258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.620331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.620348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.620354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.630321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.630338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.630344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.638817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.638833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.638840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.647822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.647839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.647845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.656703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.656720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.656726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.664932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.485 [2024-11-19 10:59:34.664949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.485 [2024-11-19 10:59:34.664955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.485 [2024-11-19 10:59:34.674191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.486 [2024-11-19 10:59:34.674208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.486 [2024-11-19 10:59:34.674215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.684734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.684752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.684758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.694017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.694034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.694041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.702687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.702703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.702709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.710391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.710407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 
[2024-11-19 10:59:34.710414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.721754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.721771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.721777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.731853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.731870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.731880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.740938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.740954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.740961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.749467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.749483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.749490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.758839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.758855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.758862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.767339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.767356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.767362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.776352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.776368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12385 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.776375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.785215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.785232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.785238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.794253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.794269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.794275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.802821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.802837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.802843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.812919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.812939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.812945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.822249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.822265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.822271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.831444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.831460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.831466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.840132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.840148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.840155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.849364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.849380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.849387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.858915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.858932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.858938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.867702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.867718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.867724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.877007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.877023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.877029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.886712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.886729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.886735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.896716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.896733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.896739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.908213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.908230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.908237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.919843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.747 [2024-11-19 10:59:34.919860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.747 [2024-11-19 10:59:34.919866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.747 [2024-11-19 10:59:34.930163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.748 [2024-11-19 10:59:34.930179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.748 [2024-11-19 10:59:34.930185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.748 [2024-11-19 10:59:34.938693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:55.748 [2024-11-19 10:59:34.938709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.748 [2024-11-19 10:59:34.938716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:34.949030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:34.949046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:34.949052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:34.959052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:34.959068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:34.959074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:34.968045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:34.968062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:34.968069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:34.975698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 
[2024-11-19 10:59:34.975715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:34.975725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:34.986439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:34.986455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:34.986462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:34.995047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:34.995063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:34.995069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:35.003665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:35.003681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:35.003688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:35.012961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:35.012978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:35.012984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:35.022009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:35.022025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:35.022032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:35.031719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:35.031735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:35.031742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:35.041027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:35.041043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:35.041050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.008 [2024-11-19 10:59:35.049605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.008 [2024-11-19 10:59:35.049621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.008 [2024-11-19 10:59:35.049628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.057973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.057993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.057999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.066794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.066811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.066817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.077107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.077122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.077129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.085507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.085523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.085529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.094812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.094828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.094834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.105748] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.105764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.105771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.116814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.116831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.116838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.124954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.124971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.124977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.136139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.136155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.136165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.148112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.148129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.148135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.155464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.155481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.155487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.165302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.165318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.165324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:56.009 [2024-11-19 10:59:35.174128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.174144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.174151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.182925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.182941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.182948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.191383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.191399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.191405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.009 [2024-11-19 10:59:35.201352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.009 [2024-11-19 10:59:35.201369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.009 [2024-11-19 10:59:35.201375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.269 [2024-11-19 10:59:35.209777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.269 [2024-11-19 10:59:35.209795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.269 [2024-11-19 10:59:35.209801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.269 [2024-11-19 10:59:35.221093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.269 [2024-11-19 10:59:35.221110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.269 [2024-11-19 10:59:35.221119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.269 [2024-11-19 10:59:35.232962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.269 [2024-11-19 10:59:35.232978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.269 [2024-11-19 10:59:35.232985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.269 [2024-11-19 10:59:35.240988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.269 [2024-11-19 10:59:35.241004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.269 [2024-11-19 10:59:35.241011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.269 [2024-11-19 10:59:35.252575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.269 [2024-11-19 10:59:35.252591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.269 [2024-11-19 10:59:35.252598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.269 [2024-11-19 10:59:35.264297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.269 [2024-11-19 10:59:35.264313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.269 [2024-11-19 10:59:35.264320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.274245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.274261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.274268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.283864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.283879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.283886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.292253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.292270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.292276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.301172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.301189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.301195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.310044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.310060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.310066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.319001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.319017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.319024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.329287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.329304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.329310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.340235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.340252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.340258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.351019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.351036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.351042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.358976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.358992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.358998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.370630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.370648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.370654] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.382689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.382706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.382712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.394111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.394129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.394138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.406402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.406419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.406425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.414417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.414433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.414439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.424261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.424277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.424284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.433466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.433482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.270 [2024-11-19 10:59:35.433489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.270 [2024-11-19 10:59:35.442187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.270 [2024-11-19 10:59:35.442203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:56.270 [2024-11-19 10:59:35.442210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:56.270 [2024-11-19 10:59:35.453554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0)
00:31:56.270 [2024-11-19 10:59:35.453571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.270 [2024-11-19 10:59:35.453577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:56.270 26538.00 IOPS, 103.66 MiB/s [2024-11-19T09:59:35.465Z] [2024-11-19 10:59:35.462852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0)
00:31:56.270 [2024-11-19 10:59:35.462869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.270 [2024-11-19 10:59:35.462875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:56.532 [2024-11-19 10:59:35.473793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0)
00:31:56.532 [2024-11-19 10:59:35.473810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.532 [2024-11-19 10:59:35.473817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:56.532 [2024-11-19 10:59:35.485118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0)
00:31:56.532 [2024-11-19 10:59:35.485138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.532 [2024-11-19 10:59:35.485144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:56.532 [2024-11-19 10:59:35.492741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0)
00:31:56.532 [2024-11-19 10:59:35.492758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.532 [2024-11-19 10:59:35.492764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:56.532 [2024-11-19 10:59:35.503091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0)
00:31:56.532 [2024-11-19 10:59:35.503108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.532 [2024-11-19 10:59:35.503114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:56.532 [2024-11-19 10:59:35.515038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0)
00:31:56.532 [2024-11-19 10:59:35.515056] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.532 [2024-11-19 10:59:35.515062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.532 [2024-11-19 10:59:35.526806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.532 [2024-11-19 10:59:35.526823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.532 [2024-11-19 10:59:35.526830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.532 [2024-11-19 10:59:35.536798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.532 [2024-11-19 10:59:35.536815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.532 [2024-11-19 10:59:35.536821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.532 [2024-11-19 10:59:35.544670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.532 [2024-11-19 10:59:35.544686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.532 [2024-11-19 10:59:35.544692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.532 [2024-11-19 10:59:35.553472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.532 [2024-11-19 10:59:35.553488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.532 [2024-11-19 10:59:35.553494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.532 [2024-11-19 10:59:35.562717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.532 [2024-11-19 10:59:35.562733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.532 [2024-11-19 10:59:35.562740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.532 [2024-11-19 10:59:35.573194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 00:31:56.532 [2024-11-19 10:59:35.573211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.532 [2024-11-19 10:59:35.573217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.532 [2024-11-19 10:59:35.583963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0) 
00:31:56.532 [2024-11-19 10:59:35.583980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.532 [2024-11-19 10:59:35.583986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:56.532 [2024-11-19 10:59:35.596543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9e5c0)
00:31:56.532 [2024-11-19 10:59:35.596560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.532 [2024-11-19 10:59:35.596567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line report repeats for the remainder of the 2-second run, timestamps 10:59:35.607 through 10:59:36.458: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0xd9e5c0), nvme_qpair.c prints the in-flight READ (sqid:1, varying cid and lba, len:1), and the completion is posted as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with cdw0:0 sqhd:0001 p:0 m:0 dnr:0 ...]
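Each burst above is the initiator-side fingerprint of one injected digest failure: nvme_tcp.c:1365 detects the CRC32C mismatch on a received data PDU, nvme_qpair.c then prints the READ that was in flight, and its completion carries status COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the counter the test reads back below. To tally these offline from a captured console log, something like the following would do; a hedged sketch, where the filename is hypothetical and the match strings are verbatim from the prints above:

# 'bperf-console.log' is an illustrative capture of this output, not a file the test creates.
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log                      # total injected-error completions
grep -o 'data digest error on tqpair=(0x[0-9a-f]*)' bperf-console.log | sort | uniq -c     # per-qpair breakdown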
00:31:57.319 26891.00 IOPS, 105.04 MiB/s [2024-11-19T09:59:36.514Z]
00:31:57.319
00:31:57.319 Latency(us)
00:31:57.319 [2024-11-19T09:59:36.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:57.319 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:57.319 nvme0n1 : 2.00 26901.68 105.08 0.00 0.00 4752.92 2239.15 17913.17
00:31:57.319 [2024-11-19T09:59:36.514Z] ===================================================================================================================
00:31:57.319 [2024-11-19T09:59:36.514Z] Total : 26901.68 105.08 0.00 0.00 4752.92 2239.15 17913.17
00:31:57.319 {
00:31:57.319   "results": [
00:31:57.319     {
00:31:57.319       "job": "nvme0n1",
00:31:57.319       "core_mask": "0x2",
00:31:57.319       "workload": "randread",
00:31:57.319       "status": "finished",
00:31:57.319       "queue_depth": 128,
00:31:57.319       "io_size": 4096,
00:31:57.319       "runtime": 2.003964,
00:31:57.319       "iops": 26901.680868518597,
00:31:57.319       "mibps": 105.08469089265077,
00:31:57.319       "io_failed": 0,
00:31:57.319       "io_timeout": 0,
00:31:57.319       "avg_latency_us": 4752.915406170779,
00:31:57.319       "min_latency_us": 2239.1466666666665,
00:31:57.319       "max_latency_us": 17913.173333333332
00:31:57.319     }
00:31:57.319   ],
00:31:57.319   "core_count": 1
00:31:57.319 }
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 ))
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1198372
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1198372 ']'
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1198372
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1198372
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1198372'
killing process with pid 1198372
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1198372
Received shutdown signal, test time was about 2.000000 seconds
00:31:57.579
00:31:57.579 Latency(us)
00:31:57.579 [2024-11-19T09:59:36.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:57.579 [2024-11-19T09:59:36.774Z] ===================================================================================================================
00:31:57.579 [2024-11-19T09:59:36.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1198372
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
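The get_transient_errcount check traced just above (before the second run's parameters are set) is the pass/fail gate for the run that just finished: it pulls the controller's error statistics over the bperf RPC socket and requires that the injected digest failures were counted as transient transport errors — 211 of them here, against "io_failed": 0, since the bdev layer is told to retry failed I/O (the --bdev-retry-count -1 option visible in the setup below). A minimal sketch of what the traced helper evidently does, assembled from the rpc.py call and jq filter in the trace (the exact body in digest.sh may differ):

# Sketch only; paths and filter copied verbatim from the xtrace above.
get_transient_errcount() {
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}
(( $(get_transient_errcount nvme0n1) > 0 ))   # the assertion the trace shows as (( 211 > 0 ))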
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1199071
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1199071 /var/tmp/bperf.sock
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1199071 ']'
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:57.839 [2024-11-19 10:59:36.896535] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:31:57.839 [2024-11-19 10:59:36.896591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199071 ]
00:31:57.839 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:57.839 Zero copy mechanism will not be used.
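Read as a script, the launch just traced boils down to the lines below. bdevperf's -z flag starts the app idle so the test can configure error injection first; the timed run begins only when perform_tests arrives over the RPC socket. A hedged sketch, assuming waitforlisten is the autotest_common.sh helper seen in the trace (it polls until the UNIX socket accepts connections):

bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$bperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &   # -z: wait for RPC before running
bperfpid=$!
waitforlisten "$bperfpid" /var/tmp/bperf.sock
echo $((131072 / 4096))   # = 32 logical blocks per READ, which is the len:32 in the prints below

The zero-copy notice is expected at this I/O size: 131072 bytes exceeds the 65536-byte zero-copy threshold, so the socket zero-copy path is simply skipped.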
00:31:57.839 [2024-11-19 10:59:36.979657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:57.839 [2024-11-19 10:59:37.009110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:58.781 10:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
10:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
10:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
10:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
10:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:59.042 nvme0n1
00:31:59.042 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
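Condensed, the setup just traced is a short RPC conversation with the idle bdevperf process; the flags below are exactly as logged, and bperf_rpc/rpc_cmd are digest.sh wrappers around rpc.py pointed at /var/tmp/bperf.sock. A sketch, not the literal script body:

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error stats; retry failed I/O indefinitely
$rpc accel_error_inject_error -o crc32c -t disable                   # clear any leftover injection
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # attach with TCP data digest enabled
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt crc32c results (-i 32 as traced)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests                             # start the timed run

Because the injection corrupts the initiator's own crc32c results, received data digests verify as bad: affected READs complete as transient transport errors, get retried, and land in the statistics that get_transient_errcount checks after the run.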
00:31:59.042 [2024-11-19 10:59:38.213063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10)
00:31:59.042 [2024-11-19 10:59:38.213096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.042 [2024-11-19 10:59:38.213104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line report repeats on tqpair=(0x2287a10), timestamps 10:59:38.224 through 10:59:38.507: the READs now carry len:32 (the 131072-byte I/O size), the cids stay low (0-14 observed at queue depth 16), and the completion sqhd advances through 0002/0022/0042/0062 as slots are reused ...]
00:31:59.566 [2024-11-19 10:59:38.519745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10)
00:31:59.566 [2024-11-19 10:59:38.519762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:31:59.566 [2024-11-19 10:59:38.519769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.566 [2024-11-19 10:59:38.529010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.566 [2024-11-19 10:59:38.529028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.566 [2024-11-19 10:59:38.529034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.566 [2024-11-19 10:59:38.539541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.566 [2024-11-19 10:59:38.539558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.566 [2024-11-19 10:59:38.539565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.566 [2024-11-19 10:59:38.550771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.566 [2024-11-19 10:59:38.550788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.566 [2024-11-19 10:59:38.550794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.566 [2024-11-19 10:59:38.563400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.566 [2024-11-19 10:59:38.563418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.566 [2024-11-19 10:59:38.563424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.566 [2024-11-19 10:59:38.575336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.566 [2024-11-19 10:59:38.575356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.566 [2024-11-19 10:59:38.575363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.566 [2024-11-19 10:59:38.586317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.566 [2024-11-19 10:59:38.586335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.566 [2024-11-19 10:59:38.586341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.597666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.597684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.597690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.608610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.608627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.608633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.617101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.617118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.617125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.622450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.622468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.622474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.629734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.629752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.629758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.638392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.638409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.638415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.648408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.648426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.648432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.659376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.659394] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.659400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.671173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.671190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.671196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.682428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.682446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.682452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.694565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.694583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.694589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.705957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.705975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.705981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.716254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.716271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.716277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.727542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.727560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.727566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.733376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.733394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.733401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.742424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.742445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.742451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.567 [2024-11-19 10:59:38.754040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.567 [2024-11-19 10:59:38.754058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.567 [2024-11-19 10:59:38.754064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.763502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.763520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.763526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.773601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.773620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.773626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.782781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.782798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.782804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.792435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.792452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.792458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.803220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 
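Every completion record in this stretch prints the same status, "TRANSIENT TRANSPORT ERROR (00/22)": status code type 0x0 (generic command status) with status code 0x22, Command Transient Transport Error, which is how the NVMe host reports a command failed by the transport layer. The trailing p/m/dnr fields are the phase tag, "more", and "do-not-retry" bits of the 16-bit completion status word. A minimal sketch of how those fields fit together (bit layout per the NVMe completion queue entry; the struct and names below are illustrative, not SPDK APIs):

#include <stdint.h>
#include <stdio.h>

/* Illustrative layout of the 16-bit NVMe completion status word
 * (completion DW3 bits 31:16), per the NVMe base specification:
 * bit 0 phase tag, bits 8:1 status code, bits 11:9 status code type,
 * bits 13:12 command retry delay, bit 14 more, bit 15 do-not-retry. */
struct status_word {
    uint16_t p   : 1;  /* phase tag               */
    uint16_t sc  : 8;  /* status code (SC)        */
    uint16_t sct : 3;  /* status code type (SCT)  */
    uint16_t crd : 2;  /* command retry delay     */
    uint16_t m   : 1;  /* more info in log page   */
    uint16_t dnr : 1;  /* do not retry            */
};

int main(void)
{
    /* SCT 0x0 / SC 0x22 is the "(00/22)" printed in the records above:
     * Command Transient Transport Error. */
    struct status_word s = { .p = 0, .sc = 0x22, .sct = 0x0 };

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
           s.sct, s.sc, s.p, s.m, s.dnr);  /* -> (00/22) p:0 m:0 dnr:0 */
    return 0;
}

Note that C bit-field ordering is implementation-defined, so real decoders (including SPDK's spdk_nvme_print_completion seen in the log) are the authority here; the struct above only mirrors the specified layout for readability.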
00:31:59.829 [2024-11-19 10:59:38.803237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.803244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.814602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.814620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.814627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.822664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.822682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.822688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.833511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.833528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.833534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.845749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.845767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.845774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.854669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.854687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.854694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.865724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.865742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.865748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.876788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.876807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.829 [2024-11-19 10:59:38.876813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.829 [2024-11-19 10:59:38.886474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.829 [2024-11-19 10:59:38.886492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.886499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.896725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.896743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.896749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.907084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.907102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.907109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.917797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.917815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.917825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.926107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.926125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.926131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.937810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.937827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.937833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.948825] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.948843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.948849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.959818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.959836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.959842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.971326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.971344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.971350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.980785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.980803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.980810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.989421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.989439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.989445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:38.996900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:38.996919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:38.996925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.830 [2024-11-19 10:59:39.007715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:39.007737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:39.007743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:31:59.830 [2024-11-19 10:59:39.018289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:31:59.830 [2024-11-19 10:59:39.018307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.830 [2024-11-19 10:59:39.018313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.097 [2024-11-19 10:59:39.027151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.097 [2024-11-19 10:59:39.027175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.097 [2024-11-19 10:59:39.027181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.097 [2024-11-19 10:59:39.036368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.097 [2024-11-19 10:59:39.036386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.097 [2024-11-19 10:59:39.036392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.097 [2024-11-19 10:59:39.045873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.097 [2024-11-19 10:59:39.045891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.097 [2024-11-19 10:59:39.045897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.098 [2024-11-19 10:59:39.056747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.098 [2024-11-19 10:59:39.056765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.098 [2024-11-19 10:59:39.056771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.098 [2024-11-19 10:59:39.063913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.098 [2024-11-19 10:59:39.063931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.098 [2024-11-19 10:59:39.063937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.098 [2024-11-19 10:59:39.069618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.098 [2024-11-19 10:59:39.069636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.098 [2024-11-19 10:59:39.069642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.098 [2024-11-19 10:59:39.074852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.098 [2024-11-19 10:59:39.074870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.098 [2024-11-19 10:59:39.074876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.098 [2024-11-19 10:59:39.081372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.098 [2024-11-19 10:59:39.081390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.098 [2024-11-19 10:59:39.081396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.098 [2024-11-19 10:59:39.093086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.098 [2024-11-19 10:59:39.093104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.098 [2024-11-19 10:59:39.093110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.098 [2024-11-19 10:59:39.104281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.098 [2024-11-19 10:59:39.104299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.098 [2024-11-19 10:59:39.104305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.098 [2024-11-19 10:59:39.111723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.098 [2024-11-19 10:59:39.111741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.098 [2024-11-19 10:59:39.111747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.098 [2024-11-19 10:59:39.120361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.099 [2024-11-19 10:59:39.120379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.099 [2024-11-19 10:59:39.120385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.099 [2024-11-19 10:59:39.129018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.099 [2024-11-19 10:59:39.129036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.099 [2024-11-19 10:59:39.129042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.099 [2024-11-19 10:59:39.133079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.099 [2024-11-19 10:59:39.133097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.099 [2024-11-19 10:59:39.133104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.099 [2024-11-19 10:59:39.141164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.099 [2024-11-19 10:59:39.141181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.099 [2024-11-19 10:59:39.141188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.099 [2024-11-19 10:59:39.149018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.099 [2024-11-19 10:59:39.149037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.099 [2024-11-19 10:59:39.149046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.099 [2024-11-19 10:59:39.156865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.100 [2024-11-19 10:59:39.156883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.100 [2024-11-19 10:59:39.156890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.100 [2024-11-19 10:59:39.164872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.100 [2024-11-19 10:59:39.164890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.100 [2024-11-19 10:59:39.164896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.100 [2024-11-19 10:59:39.173847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.100 [2024-11-19 10:59:39.173866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.100 [2024-11-19 10:59:39.173872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.100 [2024-11-19 10:59:39.182678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.100 [2024-11-19 10:59:39.182696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:00.100 [2024-11-19 10:59:39.182702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.100 [2024-11-19 10:59:39.187250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.100 [2024-11-19 10:59:39.187268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.100 [2024-11-19 10:59:39.187274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.100 [2024-11-19 10:59:39.197172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.106 [2024-11-19 10:59:39.197191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.106 [2024-11-19 10:59:39.197197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.106 3046.00 IOPS, 380.75 MiB/s [2024-11-19T09:59:39.301Z] [2024-11-19 10:59:39.208335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.107 [2024-11-19 10:59:39.208354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.107 [2024-11-19 10:59:39.208360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.107 [2024-11-19 10:59:39.218885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.107 [2024-11-19 10:59:39.218903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.107 [2024-11-19 10:59:39.218910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.107 [2024-11-19 10:59:39.226519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.107 [2024-11-19 10:59:39.226536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.107 [2024-11-19 10:59:39.226543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.107 [2024-11-19 10:59:39.232139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.107 [2024-11-19 10:59:39.232157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.107 [2024-11-19 10:59:39.232168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.107 [2024-11-19 10:59:39.240687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.107 [2024-11-19 10:59:39.240705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.107 [2024-11-19 10:59:39.240711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.107 [2024-11-19 10:59:39.248709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.107 [2024-11-19 10:59:39.248727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.108 [2024-11-19 10:59:39.248733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.108 [2024-11-19 10:59:39.259405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.108 [2024-11-19 10:59:39.259422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.108 [2024-11-19 10:59:39.259428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.108 [2024-11-19 10:59:39.270875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.108 [2024-11-19 10:59:39.270893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.108 [2024-11-19 10:59:39.270899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.108 [2024-11-19 10:59:39.279874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.108 [2024-11-19 10:59:39.279893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.108 [2024-11-19 10:59:39.279899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.108 [2024-11-19 10:59:39.288685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.108 [2024-11-19 10:59:39.288702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.108 [2024-11-19 10:59:39.288708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.291708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.291725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.291734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.295349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 
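The *ERROR* lines themselves come from nvme_tcp_accel_seq_recv_compute_crc32_done: the host recomputed the CRC32C data digest over a received data PDU, it disagreed with the digest carried on the wire, and the command was failed with the transient transport error shown in the matching completion records (expected in this run, which exercises digest-error handling). A self-contained sketch of that check, using a plain bitwise CRC32C rather than SPDK's accelerated helpers (verify_data_digest is an illustrative name, not an SPDK function):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Reflected CRC32C (Castagnoli): polynomial 0x1EDC6F41,
 * reflected form 0x82F63B78, init and final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            /* Shift one bit; apply the polynomial when the low bit was set. */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 when the digest received with the data PDU matches the payload,
 * -1 on a mismatch (the "data digest error" case logged above). */
static int verify_data_digest(const uint8_t *payload, size_t len,
                              uint32_t recv_ddgst)
{
    return crc32c(payload, len) == recv_ddgst ? 0 : -1;
}

int main(void)
{
    const uint8_t data[] = "123456789";
    /* Standard CRC-32C check value for "123456789" is 0xE3069283. */
    uint32_t wire = crc32c(data, 9);

    printf("intact:    %d\n", verify_data_digest(data, 9, wire));      /* 0  */
    printf("corrupted: %d\n", verify_data_digest(data, 9, wire ^ 1u)); /* -1 */
    return 0;
}

The bitwise loop only keeps the sketch short; production paths table-drive the CRC or offload it (e.g. SSE4.2 crc32 instructions or an accel engine, as the "accel_seq" in the logging function's name suggests).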
00:32:00.378 [2024-11-19 10:59:39.295367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.295373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.300137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.300155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.300174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.308071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.308089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.308095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.316732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.316750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.316756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.321298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.321315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.321322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.330636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.330654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.330660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.335134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.335152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.335163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.342808] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.342826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.342832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.352247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.352269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.352275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.358673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.358691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.358697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.368237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.368255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.368261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.380422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.380441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.380447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.391721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.391739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.391745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.402712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.402730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.402736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.413716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.413733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.413740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.425996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.426014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.426020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.437803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.437821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.437828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.443643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.443661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.443667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.450917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.450935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.378 [2024-11-19 10:59:39.450941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.378 [2024-11-19 10:59:39.460242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.378 [2024-11-19 10:59:39.460259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.460266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.469110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.469128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.469134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.478443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.478461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.478468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.484446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.484463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.484469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.493710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.493729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.493735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.503495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.503513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.503519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.508777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.508795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.508808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.513834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.513851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.513857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.518199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.518216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.518222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.522998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.523016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.523022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.528439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.528457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.528463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.538220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.538238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.538245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.549327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.549345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.549352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.559189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.559206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.559212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.567194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.567212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.379 [2024-11-19 10:59:39.567218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.379 [2024-11-19 10:59:39.569834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.379 [2024-11-19 10:59:39.569851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:00.379 [2024-11-19 10:59:39.569857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.574330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.574347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.574354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.579022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.579039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.579045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.583393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.583410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.583416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.592892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.592909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.592915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.597491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.597508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.597514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.607023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.607040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.607046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.611446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.611462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.611468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.618261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.618278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.618288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.622457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.622475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.622481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.632523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.632541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.632547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.641312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.641329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.641335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.646298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.646314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.646321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.652910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.652927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.652933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.658674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.658691] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.658697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.663019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.663037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.663043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.667439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.667456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.667462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.673297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.673320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.673326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.680282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.680299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.680305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.686439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.686456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.686462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.696338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.696354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.696361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.704785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.704803] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.704809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.715638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.715655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.642 [2024-11-19 10:59:39.715662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.642 [2024-11-19 10:59:39.725712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.642 [2024-11-19 10:59:39.725729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.725735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.733402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.733419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.733425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.740514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.740531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.740537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.748071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.748089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.748095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.752467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.752485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.752491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.757341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.757359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.757365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.766042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.766059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.766065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.770481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.770500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.770507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.781071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.781089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.781096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.793203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.793221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.793227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.804736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.804753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.804759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.816523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.816540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.816550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.824621] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.824639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.824646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.830068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.830085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.830091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.643 [2024-11-19 10:59:39.835199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.643 [2024-11-19 10:59:39.835215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.643 [2024-11-19 10:59:39.835221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.844222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.844240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.844246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.849395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.849412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.849418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.860112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.860129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.860135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.867738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.867755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.867761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.878721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.878739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.878745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.885443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.885460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.885467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.896277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.896294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.896300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.907134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.907152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.907162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.912308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.912326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.912332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.917089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.917106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.917113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.921378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.921394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.921400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.928777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.928794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.928800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.936545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.936563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.936569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.944094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.944112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.944121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.953279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.953296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.953302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.958707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.958725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.958731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.966430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.966448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.966454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.973768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.973786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.973792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.978166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.978183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.978189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.985225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.985242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.985249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:39.993831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:39.993848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:39.993854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:40.000054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:40.000072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:40.000079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:40.005835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:40.005858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:40.005864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:40.010904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:40.010922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.905 [2024-11-19 10:59:40.010928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:40.019584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.905 [2024-11-19 10:59:40.019602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:00.905 [2024-11-19 10:59:40.019608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.905 [2024-11-19 10:59:40.026607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.026625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.026631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.906 [2024-11-19 10:59:40.033027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.033044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.033051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.906 [2024-11-19 10:59:40.038084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.038101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.038108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.906 [2024-11-19 10:59:40.043864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.043884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.043890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.906 [2024-11-19 10:59:40.052259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.052277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.052285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.906 [2024-11-19 10:59:40.057257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.057274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.057281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.906 [2024-11-19 10:59:40.059949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.059966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.059973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.906 [2024-11-19 10:59:40.067817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.067834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.067840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.906 [2024-11-19 10:59:40.078200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.078217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.078223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.906 [2024-11-19 10:59:40.084371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.084388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.084394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.906 [2024-11-19 10:59:40.094367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:00.906 [2024-11-19 10:59:40.094384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.906 [2024-11-19 10:59:40.094391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:01.167 [2024-11-19 10:59:40.103672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:01.167 [2024-11-19 10:59:40.103689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.167 [2024-11-19 10:59:40.103695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:01.167 [2024-11-19 10:59:40.110999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:01.167 [2024-11-19 10:59:40.111016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.167 [2024-11-19 10:59:40.111022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:01.167 [2024-11-19 10:59:40.115704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:01.167 [2024-11-19 10:59:40.115721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.167 [2024-11-19 10:59:40.115728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:01.167 [2024-11-19 10:59:40.124679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:01.167 [2024-11-19 10:59:40.124696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.167 [2024-11-19 10:59:40.124706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:01.167 [2024-11-19 10:59:40.129138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:01.167 [2024-11-19 10:59:40.129155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.167 [2024-11-19 10:59:40.129166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:01.167 [2024-11-19 10:59:40.133659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:01.167 [2024-11-19 10:59:40.133676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.167 [2024-11-19 10:59:40.133682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:01.167 [2024-11-19 10:59:40.143767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:01.167 [2024-11-19 10:59:40.143784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.167 [2024-11-19 10:59:40.143790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:01.167 [2024-11-19 10:59:40.152454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:01.167 [2024-11-19 10:59:40.152471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.167 [2024-11-19 10:59:40.152477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:01.167 [2024-11-19 10:59:40.162445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 00:32:01.167 [2024-11-19 10:59:40.162462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.167 [2024-11-19 10:59:40.162468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:01.167 [2024-11-19 10:59:40.170074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10) 
00:32:01.167 [2024-11-19 10:59:40.170091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.168 [2024-11-19 10:59:40.170098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.168 [2024-11-19 10:59:40.177286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10)
00:32:01.168 [2024-11-19 10:59:40.177303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.168 [2024-11-19 10:59:40.177309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.168 [2024-11-19 10:59:40.184522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10)
00:32:01.168 [2024-11-19 10:59:40.184539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.168 [2024-11-19 10:59:40.184545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.168 [2024-11-19 10:59:40.194688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10)
00:32:01.168 [2024-11-19 10:59:40.194705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.168 [2024-11-19 10:59:40.194711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.168 [2024-11-19 10:59:40.204871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2287a10)
00:32:01.168 [2024-11-19 10:59:40.204888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.168 [2024-11-19 10:59:40.204894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.168 3580.50 IOPS, 447.56 MiB/s
00:32:01.168 Latency(us)
00:32:01.168 [2024-11-19T09:59:40.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:01.168 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:01.168 nvme0n1 : 2.01 3577.73 447.22 0.00 0.00 4469.37 529.07 18350.08
00:32:01.168 [2024-11-19T09:59:40.363Z] ===================================================================================================================
00:32:01.168 [2024-11-19T09:59:40.363Z] Total : 3577.73 447.22 0.00 0.00 4469.37 529.07 18350.08
00:32:01.168 {
00:32:01.168 "results": [
00:32:01.168 {
00:32:01.168 "job": "nvme0n1",
00:32:01.168 "core_mask": "0x2",
00:32:01.168 "workload": "randread",
00:32:01.168 "status": "finished",
00:32:01.168 "queue_depth": 16,
00:32:01.168 "io_size": 131072,
00:32:01.168 "runtime": 2.006018,
00:32:01.168 "iops": 3577.7345965988343,
00:32:01.168 "mibps": 447.2168245748543,
00:32:01.168 "io_failed": 0,
00:32:01.168 "io_timeout": 0,
00:32:01.168 "avg_latency_us": 4469.370013468952,
00:32:01.168 "min_latency_us": 529.0666666666667,
00:32:01.168 "max_latency_us": 18350.08
00:32:01.168 }
00:32:01.168 ],
00:32:01.168 "core_count": 1
00:32:01.168 }
00:32:01.168 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:01.168 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:01.168 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:01.168 | .driver_specific
00:32:01.168 | .nvme_error
00:32:01.168 | .status_code
00:32:01.168 | .command_transient_transport_error'
00:32:01.168 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 232 > 0 ))
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1199071
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1199071 ']'
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1199071
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1199071
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1199071'
00:32:01.429 killing process with pid 1199071
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1199071
00:32:01.429 Received shutdown signal, test time was about 2.000000 seconds
00:32:01.429
00:32:01.429 Latency(us)
00:32:01.429 [2024-11-19T09:59:40.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:01.429 [2024-11-19T09:59:40.624Z] ===================================================================================================================
00:32:01.429 [2024-11-19T09:59:40.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1199071
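The randread iteration above finishes by reading the accumulated NVMe error statistics back over the bperf RPC socket and asserting that the transient transport error counter is non-zero (the traced (( 232 > 0 ))). A minimal standalone sketch of that check, assuming an SPDK checkout at $SPDK_DIR and a bdevperf instance serving RPCs on /var/tmp/bperf.sock (both names are placeholders, not taken from this log):

    # Pull per-bdev NVMe error statistics and extract the count of commands that
    # completed with TRANSIENT TRANSPORT ERROR; the jq filter is the one traced
    # above. The counters are populated because the bdev layer was configured
    # with bdev_nvme_set_options --nvme-error-stat.
    get_transient_errcount() {
        local bdev=$1
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }
    # This run counted 232 transient errors, so the assertion succeeds.
    (( $(get_transient_errcount nvme0n1) > 0 ))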
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1199753
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1199753 /var/tmp/bperf.sock
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1199753 ']'
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:01.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:01.429 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:01.691 [2024-11-19 10:59:40.657490] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:32:01.691 [2024-11-19 10:59:40.657547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199753 ]
00:32:01.691 [2024-11-19 10:59:40.740322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:01.691 [2024-11-19 10:59:40.768750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:02.262 10:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:02.262 10:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:32:02.262 10:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:02.262 10:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
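Each iteration launches a fresh bdevperf in wait-for-RPC mode and only then configures it over the socket, which is why the trace interleaves the launch with waitforlisten. A condensed sketch of that sequence (flags copied from the traced invocation; $SPDK_DIR is a placeholder for the checkout path):

    # Start bdevperf idle (-z) on core mask 0x2: randwrite workload, 4 KiB I/O,
    # queue depth 128, 2-second runtime, RPC socket at /var/tmp/bperf.sock.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Once the socket is up, enable per-controller NVMe error-status counters
    # and retry failed I/O indefinitely (-1), so injected digest errors are
    # tallied as statistics rather than failing the job outright.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1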
00:32:02.783 10:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:02.783 10:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:02.783 10:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:02.783 10:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:02.783 10:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:02.783 10:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:03.044 Running I/O for 2 seconds...
00:32:03.044 [2024-11-19 10:59:42.030617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e0630
00:32:03.044 [2024-11-19 10:59:42.031655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.031684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:32:03.044 [2024-11-19 10:59:42.039345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e1710
00:32:03.044 [2024-11-19 10:59:42.040403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.040420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:03.044 [2024-11-19 10:59:42.047888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0
00:32:03.044 [2024-11-19 10:59:42.048954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.048971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:03.044 [2024-11-19 10:59:42.056405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e38d0
00:32:03.044 [2024-11-19 10:59:42.057441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.057458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:03.044 [2024-11-19 10:59:42.064908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e49b0
00:32:03.044 [2024-11-19 10:59:42.065986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.066002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
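Note which socket each RPC goes to: the crc32c corruption is armed through rpc_cmd, i.e. the NVMe-oF target's default RPC socket, not through bperf_rpc, so it is the target-side CRC32C verification that is skewed (the earlier -t disable call cleared any stale injection before the controller attach). Every write the host sends then appears to arrive with a bad data digest: the target's tcp.c rejects the PDU and completes the command with COMMAND TRANSIENT TRANSPORT ERROR (00/22), while the host quietly retries (--bdev-retry-count -1) and only the counters record the failures (--nvme-error-stat). A sketch of this phase, with both commands as they appear in the trace (the -i 256 argument is carried over verbatim; rpc.py accel_error_inject_error -h documents its meaning):

    # Arm the fault on the target side (default RPC socket), then start the
    # timed workload in the already-running bdevperf over its own socket.
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # Each hit is logged as a "Data digest error" on the TCP qpair followed by a
    # retried COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, as below.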
00:32:03.044 [2024-11-19 10:59:42.073407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5a90
00:32:03.044 [2024-11-19 10:59:42.074468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.074485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:03.044 [2024-11-19 10:59:42.081876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ec840
00:32:03.044 [2024-11-19 10:59:42.082952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.082969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:03.044 [2024-11-19 10:59:42.090354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ed920
00:32:03.044 [2024-11-19 10:59:42.091424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.091440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:03.044 [2024-11-19 10:59:42.098838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00
00:32:03.044 [2024-11-19 10:59:42.099899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.099915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:03.044 [2024-11-19 10:59:42.107301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166efae0
00:32:03.044 [2024-11-19 10:59:42.108374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.108390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:03.044 [2024-11-19 10:59:42.115751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f0bc0
00:32:03.044 [2024-11-19 10:59:42.116799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.116815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:03.044 [2024-11-19 10:59:42.124182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f1ca0
00:32:03.044 [2024-11-19 10:59:42.125242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.044 [2024-11-19 10:59:42.125258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.044 [2024-11-19 10:59:42.132621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f2d80 00:32:03.044 [2024-11-19 10:59:42.133698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.133714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.141076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f3e60 00:32:03.045 [2024-11-19 10:59:42.142131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.142147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.149516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f4f40 00:32:03.045 [2024-11-19 10:59:42.150594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.150612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.157958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e73e0 00:32:03.045 [2024-11-19 10:59:42.159023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.159039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.166410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e01f8 00:32:03.045 [2024-11-19 10:59:42.167425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.167441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.174833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:03.045 [2024-11-19 10:59:42.175909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.175925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.183260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e23b8 00:32:03.045 [2024-11-19 10:59:42.184326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.184341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.191706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e3498 00:32:03.045 [2024-11-19 10:59:42.192762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.192778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.200165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:03.045 [2024-11-19 10:59:42.201216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.201232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.208609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:03.045 [2024-11-19 10:59:42.209620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.209636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.217041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e6738 00:32:03.045 [2024-11-19 10:59:42.218098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.218113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.225462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ed4e8 00:32:03.045 [2024-11-19 10:59:42.226518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.226534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.045 [2024-11-19 10:59:42.233918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ee5c8 00:32:03.045 [2024-11-19 10:59:42.234990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.045 [2024-11-19 10:59:42.235005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.306 [2024-11-19 10:59:42.242374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ef6a8 00:32:03.306 [2024-11-19 10:59:42.243425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.306 [2024-11-19 
10:59:42.243441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.306 [2024-11-19 10:59:42.250820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f0788 00:32:03.306 [2024-11-19 10:59:42.251883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.306 [2024-11-19 10:59:42.251899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.306 [2024-11-19 10:59:42.259251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f1868 00:32:03.306 [2024-11-19 10:59:42.260315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.306 [2024-11-19 10:59:42.260331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.306 [2024-11-19 10:59:42.267675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f2948 00:32:03.306 [2024-11-19 10:59:42.268741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.306 [2024-11-19 10:59:42.268757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.306 [2024-11-19 10:59:42.276103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f3a28 00:32:03.306 [2024-11-19 10:59:42.277153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.306 [2024-11-19 10:59:42.277171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.306 [2024-11-19 10:59:42.284546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f4b08 00:32:03.306 [2024-11-19 10:59:42.285617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.306 [2024-11-19 10:59:42.285633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.306 [2024-11-19 10:59:42.292977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e8088 00:32:03.306 [2024-11-19 10:59:42.294025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.306 [2024-11-19 10:59:42.294041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.306 [2024-11-19 10:59:42.301483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e6fa8 00:32:03.306 [2024-11-19 10:59:42.302533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20974 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:03.306 [2024-11-19 10:59:42.302549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.306 [2024-11-19 10:59:42.309912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e0630 00:32:03.307 [2024-11-19 10:59:42.310972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.310987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.318344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e1710 00:32:03.307 [2024-11-19 10:59:42.319359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.319375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.326787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:03.307 [2024-11-19 10:59:42.327858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.327874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.335245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e38d0 00:32:03.307 [2024-11-19 10:59:42.336301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.336317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.343687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e49b0 00:32:03.307 [2024-11-19 10:59:42.344623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.344638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.352120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5a90 00:32:03.307 [2024-11-19 10:59:42.353178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.353193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.360546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ec840 00:32:03.307 [2024-11-19 10:59:42.361613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7230 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.361629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.368977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ed920 00:32:03.307 [2024-11-19 10:59:42.369988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.370007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.377421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:03.307 [2024-11-19 10:59:42.378488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.378504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.385872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166efae0 00:32:03.307 [2024-11-19 10:59:42.386930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.386947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.394325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f0bc0 00:32:03.307 [2024-11-19 10:59:42.395372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.395388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.402761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f1ca0 00:32:03.307 [2024-11-19 10:59:42.403772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.403788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.411202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f2d80 00:32:03.307 [2024-11-19 10:59:42.412228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.412244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.419682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f3e60 00:32:03.307 [2024-11-19 10:59:42.420735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:110 nsid:1 lba:4833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.420751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.428152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f4f40 00:32:03.307 [2024-11-19 10:59:42.429227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.429243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.436622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e73e0 00:32:03.307 [2024-11-19 10:59:42.437687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.437703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.445171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e01f8 00:32:03.307 [2024-11-19 10:59:42.446184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.446200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.453618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:03.307 [2024-11-19 10:59:42.454650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.454666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.462066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e23b8 00:32:03.307 [2024-11-19 10:59:42.463123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.463138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.470531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e3498 00:32:03.307 [2024-11-19 10:59:42.471576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.471592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.478988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:03.307 [2024-11-19 10:59:42.480040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.480056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.487439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:03.307 [2024-11-19 10:59:42.488510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.488526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.307 [2024-11-19 10:59:42.495863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e6738 00:32:03.307 [2024-11-19 10:59:42.496934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.307 [2024-11-19 10:59:42.496950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.569 [2024-11-19 10:59:42.504292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ed4e8 00:32:03.569 [2024-11-19 10:59:42.505354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.569 [2024-11-19 10:59:42.505370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.569 [2024-11-19 10:59:42.512745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ee5c8 00:32:03.569 [2024-11-19 10:59:42.513804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.569 [2024-11-19 10:59:42.513820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.569 [2024-11-19 10:59:42.521198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ef6a8 00:32:03.569 [2024-11-19 10:59:42.522235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.569 [2024-11-19 10:59:42.522251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.569 [2024-11-19 10:59:42.529645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f0788 00:32:03.569 [2024-11-19 10:59:42.530712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.569 [2024-11-19 10:59:42.530727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.569 [2024-11-19 10:59:42.538088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f1868 00:32:03.569 
[2024-11-19 10:59:42.539156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.569 [2024-11-19 10:59:42.539175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.569 [2024-11-19 10:59:42.546520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f2948 00:32:03.569 [2024-11-19 10:59:42.547574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.569 [2024-11-19 10:59:42.547590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.569 [2024-11-19 10:59:42.554956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f3a28 00:32:03.569 [2024-11-19 10:59:42.556025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.569 [2024-11-19 10:59:42.556041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.569 [2024-11-19 10:59:42.563414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f4b08 00:32:03.569 [2024-11-19 10:59:42.564484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.569 [2024-11-19 10:59:42.564500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.569 [2024-11-19 10:59:42.571856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e8088 00:32:03.569 [2024-11-19 10:59:42.572917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.569 [2024-11-19 10:59:42.572932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.569 [2024-11-19 10:59:42.580311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e6fa8 00:32:03.569 [2024-11-19 10:59:42.581360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.569 [2024-11-19 10:59:42.581375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.588740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e0630 00:32:03.570 [2024-11-19 10:59:42.589812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.589830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.597170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with 
pdu=0x2000166e1710 00:32:03.570 [2024-11-19 10:59:42.598213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.598229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.605624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:03.570 [2024-11-19 10:59:42.606675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.606692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.614064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e38d0 00:32:03.570 [2024-11-19 10:59:42.615122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.615139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.622810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f7100 00:32:03.570 [2024-11-19 10:59:42.623962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.623978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.629742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fc560 00:32:03.570 [2024-11-19 10:59:42.630428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.630443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.638178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166de470 00:32:03.570 [2024-11-19 10:59:42.638876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.638892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.646607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df550 00:32:03.570 [2024-11-19 10:59:42.647307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.647323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.655052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1116520) with pdu=0x2000166fd640 00:32:03.570 [2024-11-19 10:59:42.655715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.655731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.663501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fdeb0 00:32:03.570 [2024-11-19 10:59:42.664202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.664218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.671939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ebb98 00:32:03.570 [2024-11-19 10:59:42.672633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.672649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.680364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eaab8 00:32:03.570 [2024-11-19 10:59:42.681077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.681093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.688871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e99d8 00:32:03.570 [2024-11-19 10:59:42.689568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.689585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.697337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e88f8 00:32:03.570 [2024-11-19 10:59:42.698038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.698053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.705798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e1f80 00:32:03.570 [2024-11-19 10:59:42.706515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.706531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.714264] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e3060 00:32:03.570 [2024-11-19 10:59:42.714969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.714984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.722711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4140 00:32:03.570 [2024-11-19 10:59:42.723419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.723434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.731136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5220 00:32:03.570 [2024-11-19 10:59:42.731838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.731854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.739589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e6300 00:32:03.570 [2024-11-19 10:59:42.740293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.740309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.748036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f9f68 00:32:03.570 [2024-11-19 10:59:42.748757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.748772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.570 [2024-11-19 10:59:42.756484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb048 00:32:03.570 [2024-11-19 10:59:42.757156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.570 [2024-11-19 10:59:42.757175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.764922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fc128 00:32:03.832 [2024-11-19 10:59:42.765638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.765654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.773353] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166de038 00:32:03.832 [2024-11-19 10:59:42.774069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.774084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.781775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df118 00:32:03.832 [2024-11-19 10:59:42.782480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.782495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.790225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fd208 00:32:03.832 [2024-11-19 10:59:42.790921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.790937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.798673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fe2e8 00:32:03.832 [2024-11-19 10:59:42.799376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.799392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.807124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166feb58 00:32:03.832 [2024-11-19 10:59:42.807839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.807857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.815549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eaef0 00:32:03.832 [2024-11-19 10:59:42.816228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.816243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.823975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e9e10 00:32:03.832 [2024-11-19 10:59:42.824677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.824692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 
[2024-11-19 10:59:42.832419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e8d30 00:32:03.832 [2024-11-19 10:59:42.833112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.833127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.840872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e1b48 00:32:03.832 [2024-11-19 10:59:42.841588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.841603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.850429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e2c28 00:32:03.832 [2024-11-19 10:59:42.851602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.851618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.859707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166f6020 00:32:03.832 [2024-11-19 10:59:42.860868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.860885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.867512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fac10 00:32:03.832 [2024-11-19 10:59:42.868530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.868546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.875243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fd640 00:32:03.832 [2024-11-19 10:59:42.876063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.876079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.884050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4de8 00:32:03.832 [2024-11-19 10:59:42.884851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.832 [2024-11-19 10:59:42.884870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004c 
p:0 m:0 dnr:0 00:32:03.832 [2024-11-19 10:59:42.892557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ed4e8 00:32:03.832 [2024-11-19 10:59:42.893366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.893382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.901031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ef6a8 00:32:03.833 [2024-11-19 10:59:42.901833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.901849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.909488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fcdd0 00:32:03.833 [2024-11-19 10:59:42.910271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.910287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.917980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:03.833 [2024-11-19 10:59:42.918777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.918793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.926481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fac10 00:32:03.833 [2024-11-19 10:59:42.927237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.927253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.934987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb480 00:32:03.833 [2024-11-19 10:59:42.935787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.935802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.943469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fd640 00:32:03.833 [2024-11-19 10:59:42.944235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.944250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.951960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4de8 00:32:03.833 [2024-11-19 10:59:42.952755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.952772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.960455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ed4e8 00:32:03.833 [2024-11-19 10:59:42.961238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.961253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.968921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ef6a8 00:32:03.833 [2024-11-19 10:59:42.969708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.969724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.977430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fcdd0 00:32:03.833 [2024-11-19 10:59:42.978225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.978242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.985887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:03.833 [2024-11-19 10:59:42.986696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.986711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:42.994394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fac10 00:32:03.833 [2024-11-19 10:59:42.995194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:42.995209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:43.003012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb480 00:32:03.833 [2024-11-19 10:59:43.003787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:43.003803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 [2024-11-19 10:59:43.011469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fd640 00:32:03.833 [2024-11-19 10:59:43.012240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:43.012256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.833 29909.00 IOPS, 116.83 MiB/s [2024-11-19T09:59:43.028Z] [2024-11-19 10:59:43.019956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:03.833 [2024-11-19 10:59:43.020743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.833 [2024-11-19 10:59:43.020759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.028505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0 00:32:04.095 [2024-11-19 10:59:43.029287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.029306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.036957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38 00:32:04.095 [2024-11-19 10:59:43.037765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.037782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.045407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:04.095 [2024-11-19 10:59:43.046199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.046215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.053840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:04.095 [2024-11-19 10:59:43.054652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.054668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.062291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:04.095 [2024-11-19 10:59:43.063095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.063111] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.070742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8 00:32:04.095 [2024-11-19 10:59:43.071550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.071566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.079210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988 00:32:04.095 [2024-11-19 10:59:43.080002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.080018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.087667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:04.095 [2024-11-19 10:59:43.088420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.088436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.096089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ddc00 00:32:04.095 [2024-11-19 10:59:43.096896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.096911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.104518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dfdc0 00:32:04.095 [2024-11-19 10:59:43.105322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.105341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.112974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:04.095 [2024-11-19 10:59:43.113781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.113797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.121410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0 00:32:04.095 [2024-11-19 10:59:43.122200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 
10:59:43.122215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.129837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38 00:32:04.095 [2024-11-19 10:59:43.130609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.130624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.138297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:04.095 [2024-11-19 10:59:43.139091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.139107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.146717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:04.095 [2024-11-19 10:59:43.147517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.147533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.155153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:04.095 [2024-11-19 10:59:43.155946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.155961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.163592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8 00:32:04.095 [2024-11-19 10:59:43.164377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.164393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.172033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988 00:32:04.095 [2024-11-19 10:59:43.172789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.172805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.180476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:04.095 [2024-11-19 10:59:43.181238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:04.095 [2024-11-19 10:59:43.181254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.095 [2024-11-19 10:59:43.188921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ddc00 00:32:04.095 [2024-11-19 10:59:43.189726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.095 [2024-11-19 10:59:43.189742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.197351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dfdc0 00:32:04.096 [2024-11-19 10:59:43.198132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.198148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.205799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:04.096 [2024-11-19 10:59:43.206572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.206588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.214254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0 00:32:04.096 [2024-11-19 10:59:43.215045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.215060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.222703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38 00:32:04.096 [2024-11-19 10:59:43.223500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.223516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.231121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:04.096 [2024-11-19 10:59:43.231919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.231935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.239551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:04.096 [2024-11-19 10:59:43.240366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1987 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.240382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.247990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:04.096 [2024-11-19 10:59:43.248779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.248794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.256423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8 00:32:04.096 [2024-11-19 10:59:43.257187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.257203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.264859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988 00:32:04.096 [2024-11-19 10:59:43.265663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.265679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.273290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:04.096 [2024-11-19 10:59:43.274096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.274111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.096 [2024-11-19 10:59:43.281715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ddc00 00:32:04.096 [2024-11-19 10:59:43.282512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.096 [2024-11-19 10:59:43.282528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.357 [2024-11-19 10:59:43.290140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dfdc0 00:32:04.357 [2024-11-19 10:59:43.290949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.357 [2024-11-19 10:59:43.290965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.357 [2024-11-19 10:59:43.298588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:04.357 [2024-11-19 10:59:43.299391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5920 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.357 [2024-11-19 10:59:43.299406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.357 [2024-11-19 10:59:43.307044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0 00:32:04.357 [2024-11-19 10:59:43.307848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.357 [2024-11-19 10:59:43.307864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.357 [2024-11-19 10:59:43.315481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38 00:32:04.357 [2024-11-19 10:59:43.316278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.357 [2024-11-19 10:59:43.316293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.323906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:04.358 [2024-11-19 10:59:43.324709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.324728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.332326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:04.358 [2024-11-19 10:59:43.333128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.333143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.340777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:04.358 [2024-11-19 10:59:43.341582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.341597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.349213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8 00:32:04.358 [2024-11-19 10:59:43.350017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.350033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.357646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988 00:32:04.358 [2024-11-19 10:59:43.358455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:81 nsid:1 lba:3390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.358471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.366073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:04.358 [2024-11-19 10:59:43.366882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.366898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.374489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ddc00 00:32:04.358 [2024-11-19 10:59:43.375292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.375307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.382912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dfdc0 00:32:04.358 [2024-11-19 10:59:43.383713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.383728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.391356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:04.358 [2024-11-19 10:59:43.392162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.392177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.399802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0 00:32:04.358 [2024-11-19 10:59:43.400612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.400628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.408266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38 00:32:04.358 [2024-11-19 10:59:43.409072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.409087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.416687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:04.358 [2024-11-19 10:59:43.417495] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.417510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.425109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:04.358 [2024-11-19 10:59:43.425915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.425930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.433575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:04.358 [2024-11-19 10:59:43.434378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.434394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.442007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8 00:32:04.358 [2024-11-19 10:59:43.442800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.442816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.450448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988 00:32:04.358 [2024-11-19 10:59:43.451235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.451250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.458892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:04.358 [2024-11-19 10:59:43.459665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.459680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.467313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ddc00 00:32:04.358 [2024-11-19 10:59:43.468119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.468135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.475742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dfdc0 00:32:04.358 [2024-11-19 10:59:43.476548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.476563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.484201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:04.358 [2024-11-19 10:59:43.484950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.484965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.492629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0 00:32:04.358 [2024-11-19 10:59:43.493416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.493432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.501084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38 00:32:04.358 [2024-11-19 10:59:43.501881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.501896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.509497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:04.358 [2024-11-19 10:59:43.510291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.510307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.517920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:04.358 [2024-11-19 10:59:43.518718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.518734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.526369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:04.358 [2024-11-19 10:59:43.527179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.527195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.534827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8 00:32:04.358 [2024-11-19 
10:59:43.535637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.358 [2024-11-19 10:59:43.535652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.358 [2024-11-19 10:59:43.543260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988 00:32:04.358 [2024-11-19 10:59:43.544062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.359 [2024-11-19 10:59:43.544080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.551699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:04.620 [2024-11-19 10:59:43.552500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.552516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.560125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ddc00 00:32:04.620 [2024-11-19 10:59:43.560935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.560950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.568576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dfdc0 00:32:04.620 [2024-11-19 10:59:43.569414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.569430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.577033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:04.620 [2024-11-19 10:59:43.577844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.577860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.585480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0 00:32:04.620 [2024-11-19 10:59:43.586269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.586285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.593925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38 
00:32:04.620 [2024-11-19 10:59:43.594732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.594747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.602367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:04.620 [2024-11-19 10:59:43.603168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.603183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.610800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:04.620 [2024-11-19 10:59:43.611605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.611621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.619264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:04.620 [2024-11-19 10:59:43.620070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.620086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.627704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8 00:32:04.620 [2024-11-19 10:59:43.628470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.628486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.636153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988 00:32:04.620 [2024-11-19 10:59:43.636960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.636976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.644659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:04.620 [2024-11-19 10:59:43.645450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.645466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.653095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1116520) with pdu=0x2000166ddc00 00:32:04.620 [2024-11-19 10:59:43.653889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.653905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.661553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dfdc0 00:32:04.620 [2024-11-19 10:59:43.662241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.662257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.669997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:04.620 [2024-11-19 10:59:43.670812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.670827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.678435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0 00:32:04.620 [2024-11-19 10:59:43.679235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.679251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.686871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38 00:32:04.620 [2024-11-19 10:59:43.687675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.687691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.695313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:04.620 [2024-11-19 10:59:43.696119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.696134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.703746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:04.620 [2024-11-19 10:59:43.704559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.704574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.712195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:04.620 [2024-11-19 10:59:43.712977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.712993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.720641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8 00:32:04.620 [2024-11-19 10:59:43.721445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.620 [2024-11-19 10:59:43.721461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.620 [2024-11-19 10:59:43.729079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988 00:32:04.620 [2024-11-19 10:59:43.729836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.621 [2024-11-19 10:59:43.729851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.621 [2024-11-19 10:59:43.737524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:04.621 [2024-11-19 10:59:43.738328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.621 [2024-11-19 10:59:43.738344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.621 [2024-11-19 10:59:43.745946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ddc00 00:32:04.621 [2024-11-19 10:59:43.746752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.621 [2024-11-19 10:59:43.746768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.621 [2024-11-19 10:59:43.754402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dfdc0 00:32:04.621 [2024-11-19 10:59:43.755169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.621 [2024-11-19 10:59:43.755184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.621 [2024-11-19 10:59:43.762845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:04.621 [2024-11-19 10:59:43.763621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.621 [2024-11-19 10:59:43.763640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.621 [2024-11-19 10:59:43.771311] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0 00:32:04.621 [2024-11-19 10:59:43.772113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.621 [2024-11-19 10:59:43.772128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.621 [2024-11-19 10:59:43.779754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38 00:32:04.621 [2024-11-19 10:59:43.780524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.621 [2024-11-19 10:59:43.780540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.621 [2024-11-19 10:59:43.788197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:04.621 [2024-11-19 10:59:43.788995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.621 [2024-11-19 10:59:43.789011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.621 [2024-11-19 10:59:43.796648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:04.621 [2024-11-19 10:59:43.797451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.621 [2024-11-19 10:59:43.797467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.621 [2024-11-19 10:59:43.805128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:04.621 [2024-11-19 10:59:43.805920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.621 [2024-11-19 10:59:43.805936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.621 [2024-11-19 10:59:43.813580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8 00:32:04.882 [2024-11-19 10:59:43.814397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.882 [2024-11-19 10:59:43.814413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.882 [2024-11-19 10:59:43.822032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988 00:32:04.882 [2024-11-19 10:59:43.822843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.882 [2024-11-19 10:59:43.822859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.882 
[2024-11-19 10:59:43.830471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:04.882 [2024-11-19 10:59:43.831238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.882 [2024-11-19 10:59:43.831254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.882 [2024-11-19 10:59:43.838914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ddc00 00:32:04.882 [2024-11-19 10:59:43.839722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.882 [2024-11-19 10:59:43.839737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.882 [2024-11-19 10:59:43.847368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dfdc0 00:32:04.882 [2024-11-19 10:59:43.848171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.848186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.883 [2024-11-19 10:59:43.855828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578 00:32:04.883 [2024-11-19 10:59:43.856636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.856652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.883 [2024-11-19 10:59:43.864291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0 00:32:04.883 [2024-11-19 10:59:43.865096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.865112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.883 [2024-11-19 10:59:43.872734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38 00:32:04.883 [2024-11-19 10:59:43.873542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.873558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.883 [2024-11-19 10:59:43.881162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658 00:32:04.883 [2024-11-19 10:59:43.881968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.881984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 
dnr:0 00:32:04.883 [2024-11-19 10:59:43.889600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00 00:32:04.883 [2024-11-19 10:59:43.890391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.890407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.883 [2024-11-19 10:59:43.898044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8 00:32:04.883 [2024-11-19 10:59:43.898839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.898855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.883 [2024-11-19 10:59:43.906496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8 00:32:04.883 [2024-11-19 10:59:43.907261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.907276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.883 [2024-11-19 10:59:43.914946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988 00:32:04.883 [2024-11-19 10:59:43.915743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.915759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.883 [2024-11-19 10:59:43.923382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0 00:32:04.883 [2024-11-19 10:59:43.924185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.924201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.883 [2024-11-19 10:59:43.931801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166ddc00 00:32:04.883 [2024-11-19 10:59:43.932605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.932620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:04.883 [2024-11-19 10:59:43.940278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dfdc0 00:32:04.883 [2024-11-19 10:59:43.941083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.883 [2024-11-19 10:59:43.941099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 
cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:04.883 [2024-11-19 10:59:43.948725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e4578
00:32:04.883 [2024-11-19 10:59:43.949534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:04.883 [2024-11-19 10:59:43.949550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:04.883 [2024-11-19 10:59:43.957168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166dece0
00:32:04.883 [2024-11-19 10:59:43.957966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:04.883 [2024-11-19 10:59:43.957982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:04.883 [2024-11-19 10:59:43.965586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eee38
00:32:04.883 [2024-11-19 10:59:43.966362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:04.883 [2024-11-19 10:59:43.966378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:04.883 [2024-11-19 10:59:43.974008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e5658
00:32:04.883 [2024-11-19 10:59:43.974817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:04.883 [2024-11-19 10:59:43.974833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:04.883 [2024-11-19 10:59:43.982463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166eea00
00:32:04.883 [2024-11-19 10:59:43.983226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:04.883 [2024-11-19 10:59:43.983244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:04.883 [2024-11-19 10:59:43.990921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e12d8
00:32:04.883 [2024-11-19 10:59:43.991672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:04.883 [2024-11-19 10:59:43.991687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:04.883 [2024-11-19 10:59:43.999494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166fb8b8
00:32:04.883 [2024-11-19 10:59:44.000306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:04.883 [2024-11-19 10:59:44.000322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0
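Each repetition of the three-line pattern above is one injected digest failure: tcp.c:2233 (data_crc32_calc_done) reports that the CRC32C data digest computed over a received data PDU does not match the digest the PDU carried, and the WRITE that owned the PDU is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. status code type 0x0 (generic command status), status code 0x22, retryable. A minimal sketch of how that sct/sc pair sits in the 16-bit status word of an NVMe completion-queue entry, assuming the standard layout (phase tag in bit 0, SC in bits 8:1, SCT in bits 11:9, DNR in bit 15); the raw value 0x0044 is a hypothetical example, not a value taken from this run:

    # Decode the SCT/SC pair that spdk_nvme_print_completion renders as "(00/22)".
    # Layout: bit 0 = phase tag, bits 8:1 = status code (SC),
    # bits 11:9 = status code type (SCT), bit 14 = more, bit 15 = do-not-retry.
    status=0x0044   # hypothetical raw status word: SCT=0x0, SC=0x22, DNR=0
    sc=$(( (status >> 1) & 0xff ))
    sct=$(( (status >> 9) & 0x7 ))
    dnr=$(( (status >> 15) & 0x1 ))
    printf '(%02x/%02x) dnr:%d\n' "$sct" "$sc" "$dnr"   # prints "(00/22) dnr:0"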
(00/22) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:04.883 [2024-11-19 10:59:44.007938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166df988
00:32:04.883 [2024-11-19 10:59:44.008701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:04.883 [2024-11-19 10:59:44.008717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:04.883 [2024-11-19 10:59:44.016379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116520) with pdu=0x2000166e27f0
00:32:04.883 [2024-11-19 10:59:44.017183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:04.883 [2024-11-19 10:59:44.017198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:04.883 30093.00 IOPS, 117.55 MiB/s
00:32:04.883 Latency(us)
00:32:04.883 [2024-11-19T09:59:44.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:04.883 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:04.883 nvme0n1 : 2.00 30091.54 117.55 0.00 0.00 4248.24 2157.23 15073.28
00:32:04.883 [2024-11-19T09:59:44.078Z] ===================================================================================================================
00:32:04.883 [2024-11-19T09:59:44.078Z] Total : 30091.54 117.55 0.00 0.00 4248.24 2157.23 15073.28
00:32:04.883 {
00:32:04.883   "results": [
00:32:04.883     {
00:32:04.883       "job": "nvme0n1",
00:32:04.883       "core_mask": "0x2",
00:32:04.883       "workload": "randwrite",
00:32:04.883       "status": "finished",
00:32:04.883       "queue_depth": 128,
00:32:04.883       "io_size": 4096,
00:32:04.883       "runtime": 2.004351,
00:32:04.883       "iops": 30091.53586372846,
00:32:04.883       "mibps": 117.54506196768929,
00:32:04.883       "io_failed": 0,
00:32:04.883       "io_timeout": 0,
00:32:04.883       "avg_latency_us": 4248.238379149119,
00:32:04.883       "min_latency_us": 2157.2266666666665,
00:32:04.883       "max_latency_us": 15073.28
00:32:04.883     }
00:32:04.883   ],
00:32:04.883   "core_count": 1
00:32:04.883 }
00:32:04.883 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:04.883 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:04.883 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:04.883 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:04.883 | .driver_specific
00:32:04.883 | .nvme_error
00:32:04.883 | .status_code
00:32:04.883 | .command_transient_transport_error'
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
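The check above is the pass/fail core of the test: get_transient_errcount reads the --nvme-error-stat counters back out of bdev_get_iostat and requires a non-zero count of COMMAND TRANSIENT TRANSPORT ERROR completions. A minimal standalone sketch of that extraction, using the same socket, bdev name, and jq path as the trace (the standalone shell form is an approximation, not the digest.sh helper itself):

    # Count transient transport errors recorded for nvme0n1 by the bdevperf app.
    # Assumes bdev_nvme_set_options was called with --nvme-error-stat, otherwise
    # the nvme_error counters are not populated.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # passes only if the injected digest corruption was observed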
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1199753
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1199753 ']'
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1199753
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1199753
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1199753'
killing process with pid 1199753
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1199753
Received shutdown signal, test time was about 2.000000 seconds
00:32:05.146
00:32:05.146 Latency(us)
00:32:05.146 [2024-11-19T09:59:44.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:05.146 [2024-11-19T09:59:44.341Z] ===================================================================================================================
00:32:05.146 [2024-11-19T09:59:44.341Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:05.146 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1199753
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1200548
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1200548 /var/tmp/bperf.sock
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1200548 ']'
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
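Stripped of the xtrace noise, run_bperf_err has just done two things: start bdevperf suspended on a private RPC socket, then block until that socket answers. A rough equivalent is sketched below; the polling loop stands in for autotest_common.sh's waitforlisten helper and is an assumption, not the helper itself:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -z parks bdevperf until it receives a perform_tests RPC; -r selects the socket;
    # -m 2 pins it to core 1, away from the target's core mask.
    "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll the UNIX socket until the app responds to RPCs before configuring it.
    until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods \
        >/dev/null 2>&1; do sleep 0.1; done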
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:05.409 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:05.409 [2024-11-19 10:59:44.439987] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:32:05.409 [2024-11-19 10:59:44.440045] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200548 ]
00:32:05.409 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:05.409 Zero copy mechanism will not be used.
00:32:05.409 [2024-11-19 10:59:44.524004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:05.409 [2024-11-19 10:59:44.553420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:06.373 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:06.373 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:32:06.373 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:06.373 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:06.373 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:06.373 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:06.373 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:06.373 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:06.373 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:06.373 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:06.632 nvme0n1
00:32:06.892 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:06.892 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:06.892 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:06.892 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:06.892 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:06.892 10:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
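Condensed, the RPC sequence just traced is what arms this error case: per-status error counters on and unlimited retries, any previous CRC32C injection cleared, a controller attached with NVMe/TCP data digest enabled, a fresh corruption of 32 crc32c operations, and then the deferred run released. A sketch with the same sockets and flags as above (which app rpc_cmd addresses for accel_error_inject_error is determined by digest.sh's environment; routing it to the default socket here is an assumption):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1     # count errors; retry forever
    "$SPDK_ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable       # clear old injection
    bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                      # --ddgst: CRC32C data digest on
    "$SPDK_ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt 32 crc32c ops
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests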
00:32:06.892 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:06.892 Zero copy mechanism will not be used. 00:32:06.892 Running I/O for 2 seconds... 00:32:06.892 [2024-11-19 10:59:45.935005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:45.935272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:45.935299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:45.942887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:45.943143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:45.943168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:45.951127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:45.951489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:45.951507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:45.958308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:45.958597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:45.958619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:45.967117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:45.967417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:45.967433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:45.975368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:45.975428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:45.975444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:45.983880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:45.984065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:45.984081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:45.992663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:45.992908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:45.992923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:46.002665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:46.002799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:46.002815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:46.010779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:46.011022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:46.011038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:46.021644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:46.021906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:46.021922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:46.033129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:46.033420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:46.033437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:46.044282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:46.044532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:46.044548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:46.055729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:46.055982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:46.055998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:46.067143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:46.067405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:46.067421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.892 [2024-11-19 10:59:46.077641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:06.892 [2024-11-19 10:59:46.077888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.892 [2024-11-19 10:59:46.077903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.089047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.089327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.089345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.100122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.100375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.100390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.110995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.111248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.111264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.121903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.122136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.122151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.132611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.132848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.132867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.143847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.144136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.144152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.155517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.155773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.155789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.166549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.166865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.166882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.177896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.178151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.178171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.188921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.189162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.189178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.200136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.200387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.200406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.211635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.211879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.211895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.222275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.152 [2024-11-19 10:59:46.222574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.152 [2024-11-19 10:59:46.222591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.152 [2024-11-19 10:59:46.231507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.231792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.231815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.242207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.242470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.242486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.252719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.252966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.252981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.263683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.263971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.263987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.273516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.273576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.273592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.281196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 
10:59:46.281256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.281272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.288184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.288241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.288257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.297706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.297992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.298009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.307817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.308113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.308129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.315459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.315763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.315778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.325980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.326045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.326061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.334808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.335106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.335122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.153 [2024-11-19 10:59:46.342961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with 
pdu=0x2000166ff3c8 00:32:07.153 [2024-11-19 10:59:46.343017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.153 [2024-11-19 10:59:46.343033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.351449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.351730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.351747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.360659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.360717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.360733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.370595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.370898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.370915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.377485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.377704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.377719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.386326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.386390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.386409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.394935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.395000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.395016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.403449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.403514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.403530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.413079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.413134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.413150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.421754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.421815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.421831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.430917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.430971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.430986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.439558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.439820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.439836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.413 [2024-11-19 10:59:46.448399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.413 [2024-11-19 10:59:46.448461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.413 [2024-11-19 10:59:46.448477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.456108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.456366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.456381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.465103] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.465383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.465398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.474762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.475064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.475081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.484548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.484596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.484611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.494672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.494951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.494967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.505933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.506236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.506258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.516997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.517287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.517302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.528537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.528860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.528876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.539814] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.539901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.539916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.550516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.550753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.550768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.561615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.561929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.561945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.572781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.573037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.573053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.584146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.584397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.584413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.595492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.414 [2024-11-19 10:59:46.595744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.414 [2024-11-19 10:59:46.595760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.414 [2024-11-19 10:59:46.607597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.674 [2024-11-19 10:59:46.607870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.674 [2024-11-19 10:59:46.607887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.674 
[2024-11-19 10:59:46.618989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.674 [2024-11-19 10:59:46.619299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.674 [2024-11-19 10:59:46.619315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.674 [2024-11-19 10:59:46.630612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.674 [2024-11-19 10:59:46.630897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.674 [2024-11-19 10:59:46.630913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.674 [2024-11-19 10:59:46.641647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.641904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.641919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.652659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.652902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.652921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.664438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.664696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.664712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.676170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.676447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.676464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.684052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.684113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.684129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.691546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.691613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.691629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.700027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.700289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.700305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.709679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.709736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.709752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.718770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.718850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.718865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.728145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.728226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.728245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.736173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.736412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.736428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.744910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.745201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.745218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.755523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.755790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.755806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.764898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.765133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.765149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.774936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.775127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.775142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.783895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.784234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.784250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.788252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.788447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.788462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.795364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.795643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.795660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.802872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.803221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.803238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.810469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.810734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.810749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.817458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.817770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.817786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.825942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.826196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.826212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.834218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.834411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.834426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.843000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.843261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.843277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.851762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.852077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.852094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.860019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.675 [2024-11-19 10:59:46.860285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.675 [2024-11-19 10:59:46.860300] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.675 [2024-11-19 10:59:46.867694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.676 [2024-11-19 10:59:46.867959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.676 [2024-11-19 10:59:46.867975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.936 [2024-11-19 10:59:46.874056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.874260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.874280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.936 [2024-11-19 10:59:46.880565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.880790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.880805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.936 [2024-11-19 10:59:46.889522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.889818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.889835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.936 [2024-11-19 10:59:46.897912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.898228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.898245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.936 [2024-11-19 10:59:46.907440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.907763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.907780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.936 [2024-11-19 10:59:46.914002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.914206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.914223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.936 [2024-11-19 10:59:46.922835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.923026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.923042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.936 3248.00 IOPS, 406.00 MiB/s [2024-11-19T09:59:47.131Z] [2024-11-19 10:59:46.933788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.934112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.934129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.936 [2024-11-19 10:59:46.943675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.943735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.943750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.936 [2024-11-19 10:59:46.952439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.952763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.952780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.936 [2024-11-19 10:59:46.961023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.936 [2024-11-19 10:59:46.961328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.936 [2024-11-19 10:59:46.961345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:46.966818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:46.967041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:46.967056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:46.976050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:46.976371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:46.976388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:46.984567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:46.984883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:46.984900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:46.992156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:46.992489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:46.992506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:46.998770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:46.999147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:46.999168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.006751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.007066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.007083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.015048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.015373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.015390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.022062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.022377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.022394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.028406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.028594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.028610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.032274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.032464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.032480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.035915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.036102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.036118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.040084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.040278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.040294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.043991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.044184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.044200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.048060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.048255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.048271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.054193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.054381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.054397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.061150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.061379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.061398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.070167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.070596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.070614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.078823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.079023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.079039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.086566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.086867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.086884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.094735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.094787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.094803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.100512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.100700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.100716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.104427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.104614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.104630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.108294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.108484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.108500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.115263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.115469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.115485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.122884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.123214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.123231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.937 [2024-11-19 10:59:47.127354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:07.937 [2024-11-19 10:59:47.127545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.937 [2024-11-19 10:59:47.127561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.131345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.131533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.131549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.136173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.136363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.136380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.140029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.140221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.140237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.143840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 
10:59:47.144028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.144044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.147210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.147399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.147415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.151288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.151488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.151504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.154802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.154994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.155009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.158901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.159089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.159105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.166032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.166240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.166256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.171916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.172263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.172280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.177209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with 
pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.177399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.177416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.181216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.181404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.181420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.186112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.186315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.186332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.190373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.190562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.190578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.195131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.195335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.195352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.199379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.199580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.199600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.203152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.203350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.203366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.207142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.207338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.207355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.210954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.211143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.211164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.215022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.215214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.215231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.219213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.219402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.199 [2024-11-19 10:59:47.219419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.199 [2024-11-19 10:59:47.223116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.199 [2024-11-19 10:59:47.223310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.223326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.226852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.227040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.227056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.233660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.233977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.233994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.241035] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.241348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.241365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.248570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.248896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.248913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.258235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.258526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.258543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.266840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.267170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.267187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.272422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.272724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.272741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.278650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.278840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.278856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.282636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.282826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.282842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.290394] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.290581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.290597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.294760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.294949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.294966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.299505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.299695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.299711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.306033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.306226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.306242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.312759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.313083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.313100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.317132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.317327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.317343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.320978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.321172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.321188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.200 
[2024-11-19 10:59:47.325347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.325535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.325551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.333072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.333135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.333150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.338812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.339016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.339032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.347397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.347722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.347745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.354411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.354600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.354616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.357989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.358038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.358054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.362801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.363001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.363018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.368489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.368743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.368758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.373955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.374144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.374167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.378254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.378446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.378462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.382095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.382301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.382317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.386209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.386400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.386416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.200 [2024-11-19 10:59:47.390320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.200 [2024-11-19 10:59:47.390513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.200 [2024-11-19 10:59:47.390530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.461 [2024-11-19 10:59:47.394038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.461 [2024-11-19 10:59:47.394233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.461 [2024-11-19 10:59:47.394249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.461 [2024-11-19 10:59:47.398002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.461 [2024-11-19 10:59:47.398199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.461 [2024-11-19 10:59:47.398216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.461 [2024-11-19 10:59:47.402049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.461 [2024-11-19 10:59:47.402243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.461 [2024-11-19 10:59:47.402259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.461 [2024-11-19 10:59:47.405962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.461 [2024-11-19 10:59:47.406151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.461 [2024-11-19 10:59:47.406237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.461 [2024-11-19 10:59:47.412196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.461 [2024-11-19 10:59:47.412504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.461 [2024-11-19 10:59:47.412521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.461 [2024-11-19 10:59:47.419653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.461 [2024-11-19 10:59:47.419855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.461 [2024-11-19 10:59:47.419871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.461 [2024-11-19 10:59:47.426405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.461 [2024-11-19 10:59:47.426726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.461 [2024-11-19 10:59:47.426743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.461 [2024-11-19 10:59:47.434378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.461 [2024-11-19 10:59:47.434702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.461 [2024-11-19 10:59:47.434719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.461 [2024-11-19 10:59:47.441987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.461 [2024-11-19 10:59:47.442300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.461 [2024-11-19 10:59:47.442318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.461 [2024-11-19 10:59:47.448930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.449120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.449137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.453929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.454117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.454134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.462187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.462500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.462517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.468373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.468563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.468579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.473653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.473852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.473868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.477402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.477591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.477607] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.482052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.482245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.482262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.489720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.489919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.489939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.494568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.494758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.494774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.498322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.498512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.498528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.502122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.502175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.502190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.506156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.506360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.506376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.513339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.513643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.513660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.520672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.520898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.520913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.525188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.525378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.525394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.529016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.529207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.529223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.532786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.532981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.532997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.536356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.536545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.536562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.539964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.540153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.540176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.543972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.544162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 
10:59:47.544179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.548816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.549006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.549023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.553146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.553340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.553357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.556790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.556979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.556995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.560369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.560558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.560574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.564045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.564240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.564256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.567661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.567848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.462 [2024-11-19 10:59:47.567864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.462 [2024-11-19 10:59:47.572509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.462 [2024-11-19 10:59:47.572704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:08.462 [2024-11-19 10:59:47.572720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.576809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.576853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.576868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.584660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.584955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.584972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.588727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.588773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.588787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.594778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.594969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.594985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.601513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.601700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.601717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.609536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.609846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.609864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.618132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.618471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.618491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.623009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.623203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.623219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.626923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.627111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.627127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.631599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.631789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.631806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.635348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.635537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.635553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.639094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.639287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.639303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.642329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.642517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.642533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.645894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.646083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.646099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.463 [2024-11-19 10:59:47.650811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.463 [2024-11-19 10:59:47.651091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.463 [2024-11-19 10:59:47.651108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.657513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.657832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.657849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.663126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.663450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.663468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.671106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.671461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.671478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.679954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.680282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.680299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.684227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.684418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.684435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.687936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.688124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.688140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.691527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.691716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.691732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.698671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.698907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.698923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.704747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.705057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.705082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.709649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.709838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.709855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.713296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.713488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.713505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.716971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.717164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.717181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.720694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.720883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.720899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.728558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.728872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.728889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.734248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.734435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.734451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.737982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.738174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.738190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.741592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.741779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.741796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.745268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.745468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.745487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.749137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.749339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.749355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.752782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 
10:59:47.752969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.752986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.758384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.724 [2024-11-19 10:59:47.758700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.724 [2024-11-19 10:59:47.758717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.724 [2024-11-19 10:59:47.766269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.766594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.766611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.771925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.772298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.772315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.781689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.782003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.782020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.787919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.788108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.788124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.791643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.791832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.791849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.800157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with 
pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.800467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.800483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.808611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.808898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.808916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.813256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.813444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.813460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.817311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.817499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.817515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.821217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.821404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.821420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.825082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.825274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.825290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.831373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.831698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.831714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.836366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.836557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.836573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.840116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.840310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.840326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.844177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.844365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.844381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.848262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.848451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.848467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.851834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.852023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.852039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.855305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.855494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.855510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.858929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.859116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.859132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.862678] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.862866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.862882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.871779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.872089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.872106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.878130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.878326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.878342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.885275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.885475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.885496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.890629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.890817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.890834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.897769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.897960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.897976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.905239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.905578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.905595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.725 
[2024-11-19 10:59:47.910284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.725 [2024-11-19 10:59:47.910474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.725 [2024-11-19 10:59:47.910491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.725 [2024-11-19 10:59:47.913994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.726 [2024-11-19 10:59:47.914203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.726 [2024-11-19 10:59:47.914218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.986 [2024-11-19 10:59:47.917767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.986 [2024-11-19 10:59:47.917956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.986 [2024-11-19 10:59:47.917972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:08.986 [2024-11-19 10:59:47.923413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.986 [2024-11-19 10:59:47.923603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.986 [2024-11-19 10:59:47.923619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:08.986 [2024-11-19 10:59:47.927419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.986 [2024-11-19 10:59:47.927607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.986 [2024-11-19 10:59:47.927623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:08.986 [2024-11-19 10:59:47.931877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1116860) with pdu=0x2000166ff3c8 00:32:08.986 [2024-11-19 10:59:47.932127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.986 [2024-11-19 10:59:47.932143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.986 4483.50 IOPS, 560.44 MiB/s 00:32:08.986 Latency(us) 00:32:08.986 [2024-11-19T09:59:48.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.986 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:08.986 nvme0n1 : 2.00 4481.86 560.23 0.00 0.00 3564.23 1583.79 11960.32 00:32:08.986 [2024-11-19T09:59:48.181Z] 
=================================================================================================================== 00:32:08.986 [2024-11-19T09:59:48.181Z] Total : 4481.86 560.23 0.00 0.00 3564.23 1583.79 11960.32 00:32:08.986 { 00:32:08.986 "results": [ 00:32:08.986 { 00:32:08.986 "job": "nvme0n1", 00:32:08.986 "core_mask": "0x2", 00:32:08.986 "workload": "randwrite", 00:32:08.986 "status": "finished", 00:32:08.986 "queue_depth": 16, 00:32:08.986 "io_size": 131072, 00:32:08.986 "runtime": 2.004302, 00:32:08.986 "iops": 4481.85952017211, 00:32:08.986 "mibps": 560.2324400215138, 00:32:08.986 "io_failed": 0, 00:32:08.986 "io_timeout": 0, 00:32:08.986 "avg_latency_us": 3564.2309369549894, 00:32:08.986 "min_latency_us": 1583.7866666666666, 00:32:08.986 "max_latency_us": 11960.32 00:32:08.986 } 00:32:08.986 ], 00:32:08.986 "core_count": 1 00:32:08.986 } 00:32:08.986 10:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:08.986 10:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:08.986 10:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:08.986 | .driver_specific 00:32:08.986 | .nvme_error 00:32:08.986 | .status_code 00:32:08.986 | .command_transient_transport_error' 00:32:08.986 10:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:08.986 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 290 > 0 )) 00:32:08.986 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1200548 00:32:08.986 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1200548 ']' 00:32:08.986 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1200548 00:32:08.986 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:08.986 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:08.986 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1200548 00:32:09.246 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:09.246 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:09.246 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1200548' 00:32:09.246 killing process with pid 1200548 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1200548 00:32:09.247 Received shutdown signal, test time was about 2.000000 seconds 00:32:09.247 00:32:09.247 Latency(us) 00:32:09.247 [2024-11-19T09:59:48.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.247 [2024-11-19T09:59:48.442Z] =================================================================================================================== 00:32:09.247 [2024-11-19T09:59:48.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:09.247 10:59:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1200548 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1198086 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1198086 ']' 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1198086 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1198086 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1198086' 00:32:09.247 killing process with pid 1198086 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1198086 00:32:09.247 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1198086 00:32:09.507 00:32:09.507 real 0m16.520s 00:32:09.507 user 0m32.827s 00:32:09.507 sys 0m3.528s 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:09.507 ************************************ 00:32:09.507 END TEST nvmf_digest_error 00:32:09.507 ************************************ 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:09.507 rmmod nvme_tcp 00:32:09.507 rmmod nvme_fabrics 00:32:09.507 rmmod nvme_keyring 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1198086 ']' 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1198086 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1198086 ']' 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@958 -- # kill -0 1198086 00:32:09.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1198086) - No such process 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1198086 is not found' 00:32:09.507 Process with pid 1198086 is not found 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.507 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:12.050 00:32:12.050 real 0m43.343s 00:32:12.050 user 1m8.204s 00:32:12.050 sys 0m13.022s 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:12.050 ************************************ 00:32:12.050 END TEST nvmf_digest 00:32:12.050 ************************************ 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.050 ************************************ 00:32:12.050 START TEST nvmf_bdevperf 00:32:12.050 ************************************ 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:12.050 * Looking for test storage... 
00:32:12.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:12.050 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.051 --rc genhtml_branch_coverage=1 00:32:12.051 --rc genhtml_function_coverage=1 00:32:12.051 --rc genhtml_legend=1 00:32:12.051 --rc geninfo_all_blocks=1 00:32:12.051 --rc geninfo_unexecuted_blocks=1 00:32:12.051 00:32:12.051 ' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.051 --rc genhtml_branch_coverage=1 00:32:12.051 --rc genhtml_function_coverage=1 00:32:12.051 --rc genhtml_legend=1 00:32:12.051 --rc geninfo_all_blocks=1 00:32:12.051 --rc geninfo_unexecuted_blocks=1 00:32:12.051 00:32:12.051 ' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.051 --rc genhtml_branch_coverage=1 00:32:12.051 --rc genhtml_function_coverage=1 00:32:12.051 --rc genhtml_legend=1 00:32:12.051 --rc geninfo_all_blocks=1 00:32:12.051 --rc geninfo_unexecuted_blocks=1 00:32:12.051 00:32:12.051 ' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.051 --rc genhtml_branch_coverage=1 00:32:12.051 --rc genhtml_function_coverage=1 00:32:12.051 --rc genhtml_legend=1 00:32:12.051 --rc geninfo_all_blocks=1 00:32:12.051 --rc geninfo_unexecuted_blocks=1 00:32:12.051 00:32:12.051 ' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:12.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.051 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.052 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.052 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:12.052 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:12.052 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:12.052 10:59:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:20.189 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:20.189 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
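The step that follows resolves each matched PCI function to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of that discovery, assuming the Intel E810 IDs (0x8086 / 0x159b) seen in this run rather than SPDK's full pci_bus_cache machinery:

#!/usr/bin/env bash
# Mirror the "Found 0000:4b:00.x (0x8086 - 0x159b)" / "Found net devices under ..."
# lines below: match NICs by vendor/device ID, then list their netdevs via sysfs.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")    # e.g. 0x8086
    device=$(<"$pci/device")    # e.g. 0x159b
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    for net_dev in "$pci"/net/*; do
        [[ -e $net_dev ]] || continue   # driver bound but no netdev exposed
        echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
    done
done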
00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:20.189 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:20.189 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:20.189 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:20.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:32:20.190 00:32:20.190 --- 10.0.0.2 ping statistics --- 00:32:20.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.190 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:32:20.190 00:32:20.190 --- 10.0.0.1 ping statistics --- 00:32:20.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.190 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1205477 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1205477 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1205477 ']' 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.190 10:59:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:20.190 [2024-11-19 10:59:58.610072] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:32:20.190 [2024-11-19 10:59:58.610135] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.190 [2024-11-19 10:59:58.708693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:20.190 [2024-11-19 10:59:58.761053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.190 [2024-11-19 10:59:58.761105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:20.190 [2024-11-19 10:59:58.761114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:20.190 [2024-11-19 10:59:58.761121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:20.190 [2024-11-19 10:59:58.761127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:20.190 [2024-11-19 10:59:58.762958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.190 [2024-11-19 10:59:58.763117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.190 [2024-11-19 10:59:58.763120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:20.451 [2024-11-19 10:59:59.490173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:20.451 Malloc0 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
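Up to this point the harness has carved a two-side topology out of one NIC pair and started the target inside a namespace. A condensed sketch of the same setup, using the interface names, addresses, and target flags from this run; the background start plus spdk_get_version polling is a stand-in for the harness's nvmfappstart/waitforlisten helpers:

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # root ns -> namespace reachability

# Start the target inside the namespace and poll its RPC socket until it is up.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do sleep 0.2; done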
00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:20.451 [2024-11-19 10:59:59.563573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:20.451 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:20.451 { 00:32:20.451 "params": { 00:32:20.452 "name": "Nvme$subsystem", 00:32:20.452 "trtype": "$TEST_TRANSPORT", 00:32:20.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.452 "adrfam": "ipv4", 00:32:20.452 "trsvcid": "$NVMF_PORT", 00:32:20.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.452 "hdgst": ${hdgst:-false}, 00:32:20.452 "ddgst": ${ddgst:-false} 00:32:20.452 }, 00:32:20.452 "method": "bdev_nvme_attach_controller" 00:32:20.452 } 00:32:20.452 EOF 00:32:20.452 )") 00:32:20.452 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:32:20.452 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:32:20.452 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:32:20.452 10:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:20.452 "params": { 00:32:20.452 "name": "Nvme1", 00:32:20.452 "trtype": "tcp", 00:32:20.452 "traddr": "10.0.0.2", 00:32:20.452 "adrfam": "ipv4", 00:32:20.452 "trsvcid": "4420", 00:32:20.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:20.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:20.452 "hdgst": false, 00:32:20.452 "ddgst": false 00:32:20.452 }, 00:32:20.452 "method": "bdev_nvme_attach_controller" 00:32:20.452 }' 00:32:20.452 [2024-11-19 10:59:59.623931] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:32:20.452 [2024-11-19 10:59:59.624003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205804 ] 00:32:20.712 [2024-11-19 10:59:59.718802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.712 [2024-11-19 10:59:59.772655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.973 Running I/O for 1 seconds... 00:32:22.359 8602.00 IOPS, 33.60 MiB/s 00:32:22.359 Latency(us) 00:32:22.359 [2024-11-19T10:00:01.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.359 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:22.359 Verification LBA range: start 0x0 length 0x4000 00:32:22.359 Nvme1n1 : 1.01 8656.71 33.82 0.00 0.00 14709.61 1515.52 16165.55 00:32:22.359 [2024-11-19T10:00:01.554Z] =================================================================================================================== 00:32:22.359 [2024-11-19T10:00:01.554Z] Total : 8656.71 33.82 0.00 0.00 14709.61 1515.52 16165.55 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1206207 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:22.359 { 00:32:22.359 "params": { 00:32:22.359 "name": "Nvme$subsystem", 00:32:22.359 "trtype": "$TEST_TRANSPORT", 00:32:22.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:22.359 "adrfam": "ipv4", 00:32:22.359 "trsvcid": "$NVMF_PORT", 00:32:22.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:22.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:22.359 "hdgst": ${hdgst:-false}, 00:32:22.359 "ddgst": ${ddgst:-false} 00:32:22.359 }, 00:32:22.359 "method": "bdev_nvme_attach_controller" 00:32:22.359 } 00:32:22.359 EOF 00:32:22.359 )") 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
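With the target answering RPCs, the data path exercised above reduces to five rpc.py calls plus one bdevperf invocation; gen_nvmf_target_json merely prints the bdev_nvme_attach_controller stanza traced earlier and hands it to bdevperf on an inherited file descriptor (/dev/fd/62 and /dev/fd/63 in this run). A sketch of the equivalent manual run with this job's NQNs and parameters; the outer "subsystems" wrapper is an assumption based on SPDK's standard JSON config shape:

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Provision: TCP transport, 64 MiB x 512 B malloc bdev, subsystem + namespace,
# listener on 10.0.0.2:4420 -- the exact RPCs issued above.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 1-second verify run at queue depth 128, 4 KiB I/O, config fed via /dev/fd/NN.
"$SPDK/build/examples/bdevperf" --json <(cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1",
             "hdgst": false, "ddgst": false}}]}]}
EOF
) -q 128 -o 4096 -w verify -t 1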
00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:32:22.359 11:00:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:22.359 "params": { 00:32:22.359 "name": "Nvme1", 00:32:22.359 "trtype": "tcp", 00:32:22.359 "traddr": "10.0.0.2", 00:32:22.359 "adrfam": "ipv4", 00:32:22.359 "trsvcid": "4420", 00:32:22.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:22.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:22.359 "hdgst": false, 00:32:22.359 "ddgst": false 00:32:22.359 }, 00:32:22.359 "method": "bdev_nvme_attach_controller" 00:32:22.359 }' 00:32:22.359 [2024-11-19 11:00:01.311708] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:32:22.359 [2024-11-19 11:00:01.311766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206207 ] 00:32:22.359 [2024-11-19 11:00:01.403585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.359 [2024-11-19 11:00:01.438611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.620 Running I/O for 15 seconds... 00:32:24.502 10386.00 IOPS, 40.57 MiB/s [2024-11-19T10:00:04.269Z] 10944.50 IOPS, 42.75 MiB/s [2024-11-19T10:00:04.269Z] 11:00:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1205477 00:32:25.074 11:00:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:25.339 [2024-11-19 11:00:04.274463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.339 [2024-11-19 11:00:04.274504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.339 [2024-11-19 11:00:04.274524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.339 [2024-11-19 11:00:04.274541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.339 [2024-11-19 11:00:04.274553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.339 [2024-11-19 11:00:04.274562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.339 [2024-11-19 11:00:04.274573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.339 [2024-11-19 11:00:04.274582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.339 [2024-11-19 11:00:04.274594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.339 [2024-11-19 11:00:04.274604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.339 [2024-11-19 11:00:04.274615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.339 [2024-11-19 
11:00:04.274625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.339 (nvme_qpair.c READ command / ABORTED - SQ DELETION (00/08) completion notice pairs repeat in this same pattern for every remaining outstanding I/O, sqid:1, len:8, lba 103264 through 103896; the repeated notices are elided here)
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104064 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:25.342 [2024-11-19 11:00:04.276710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.342 [2024-11-19 11:00:04.276779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.342 [2024-11-19 11:00:04.276788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.343 [2024-11-19 11:00:04.276796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.343 [2024-11-19 11:00:04.276805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.343 [2024-11-19 11:00:04.276812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.343 [2024-11-19 11:00:04.276822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.343 [2024-11-19 11:00:04.276830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.343 [2024-11-19 11:00:04.276839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.343 [2024-11-19 11:00:04.276847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.343 [2024-11-19 11:00:04.276856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.343 [2024-11-19 11:00:04.276864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.343 [2024-11-19 11:00:04.276873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.343 [2024-11-19 
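Every completion in the run above carries the status pair (00/08): Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), which the initiator reports when queued reads are torn down together with their submission queue during the reset. A minimal sketch of decoding that pair in plain C (the helper and its mapping table are illustrative, not SPDK's spdk_nvme_cpl API):

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative decoder for the "(sct/sc)" pair printed in the log above.
 * Only the values relevant to this log are mapped (NVMe base spec,
 * Generic Command Status). This is a sketch, not SPDK's own table. */
static const char *nvme_status_str(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0) {                   /* Generic Command Status */
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "GENERIC (unmapped)";
        }
    }
    return "NON-GENERIC (unmapped)";
}

int main(void)
{
    /* The status pair from the completions above: (00/08). */
    printf("%s\n", nvme_status_str(0x00, 0x08)); /* ABORTED - SQ DELETION */
    return 0;
}
```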
00:32:25.343 [2024-11-19 11:00:04.276891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1390 is same with the state(6) to be set
00:32:25.343 [2024-11-19 11:00:04.276900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:25.343 [2024-11-19 11:00:04.276907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:25.343 [2024-11-19 11:00:04.276913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104232 len:8 PRP1 0x0 PRP2 0x0
00:32:25.343 [2024-11-19 11:00:04.276921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:25.343 [2024-11-19 11:00:04.280557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:25.343 [2024-11-19 11:00:04.280611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:25.343 [2024-11-19 11:00:04.281476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:25.343 [2024-11-19 11:00:04.281516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:25.343 [2024-11-19 11:00:04.281527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:25.343 [2024-11-19 11:00:04.281764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:25.343 [2024-11-19 11:00:04.281984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:25.343 [2024-11-19 11:00:04.281993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:25.343 [2024-11-19 11:00:04.282002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:25.343 [2024-11-19 11:00:04.282011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
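Each reconnect attempt fails inside posix_sock_create with "connect() failed, errno = 111"; on Linux, errno 111 is ECONNREFUSED, the error a TCP connect() gets when nothing is accepting on the target address (here 10.0.0.2:4420, the IANA-registered NVMe/TCP port). A minimal standalone reproduction, assuming only POSIX sockets and that no listener is up on that address (this is not the SPDK code path itself):

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    /* With no listener on 10.0.0.2:4420 this fails with errno 111. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```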
00:32:25.343 [2024-11-19 11:00:04.294555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:25.343 [2024-11-19 11:00:04.295113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:25.343 [2024-11-19 11:00:04.295154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:25.343 [2024-11-19 11:00:04.295178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:25.343 [2024-11-19 11:00:04.295415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:25.343 [2024-11-19 11:00:04.295635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:25.343 [2024-11-19 11:00:04.295644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:25.343 [2024-11-19 11:00:04.295653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:25.343 [2024-11-19 11:00:04.295661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... the identical reset cycle (resetting controller -> connect() failed, errno = 111 -> sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 -> controller reinitialization failed -> Resetting controller failed) repeats every ~13-14 ms, starting at 11:00:04.308412, .322205, .336073, .349976, .363799, .377674, .391451, .405235, .419034, .432842, .446744, .460620, .474540, .488469, .502349, .516127, .530089, .544003, .557928, .571873, .585762, .599704, .613443, and .627330 ...]
00:32:25.609 9762.00 IOPS, 38.13 MiB/s [2024-11-19T10:00:04.804Z]
[... the same cycle continues at 11:00:04.642269, .656135, .669968, .683913, .697991, .711901, and .725802, the last completing at 11:00:04.727099 ...]
00:32:25.609 [2024-11-19 11:00:04.739696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.609 [2024-11-19 11:00:04.740392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.609 [2024-11-19 11:00:04.740458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.609 [2024-11-19 11:00:04.740471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.609 [2024-11-19 11:00:04.740723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.610 [2024-11-19 11:00:04.740948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.610 [2024-11-19 11:00:04.740960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.610 [2024-11-19 11:00:04.740969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.610 [2024-11-19 11:00:04.740979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.610 [2024-11-19 11:00:04.753556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.610 [2024-11-19 11:00:04.754251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.610 [2024-11-19 11:00:04.754318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.610 [2024-11-19 11:00:04.754331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.610 [2024-11-19 11:00:04.754583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.610 [2024-11-19 11:00:04.754808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.610 [2024-11-19 11:00:04.754821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.610 [2024-11-19 11:00:04.754830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.610 [2024-11-19 11:00:04.754840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.610 [2024-11-19 11:00:04.767408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.610 [2024-11-19 11:00:04.768097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.610 [2024-11-19 11:00:04.768175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.610 [2024-11-19 11:00:04.768189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.610 [2024-11-19 11:00:04.768441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.610 [2024-11-19 11:00:04.768665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.610 [2024-11-19 11:00:04.768677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.610 [2024-11-19 11:00:04.768686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.610 [2024-11-19 11:00:04.768695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.610 [2024-11-19 11:00:04.781276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.610 [2024-11-19 11:00:04.782013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.610 [2024-11-19 11:00:04.782080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.610 [2024-11-19 11:00:04.782093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.610 [2024-11-19 11:00:04.782361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.610 [2024-11-19 11:00:04.782587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.610 [2024-11-19 11:00:04.782599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.610 [2024-11-19 11:00:04.782608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.610 [2024-11-19 11:00:04.782618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.610 [2024-11-19 11:00:04.795219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.610 [2024-11-19 11:00:04.795849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.610 [2024-11-19 11:00:04.795880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.610 [2024-11-19 11:00:04.795889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.610 [2024-11-19 11:00:04.796109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.610 [2024-11-19 11:00:04.796341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.610 [2024-11-19 11:00:04.796355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.610 [2024-11-19 11:00:04.796363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.610 [2024-11-19 11:00:04.796372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.873 [2024-11-19 11:00:04.809169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.873 [2024-11-19 11:00:04.809781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.873 [2024-11-19 11:00:04.809807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.873 [2024-11-19 11:00:04.809816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.873 [2024-11-19 11:00:04.810042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.873 [2024-11-19 11:00:04.810272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.873 [2024-11-19 11:00:04.810284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.873 [2024-11-19 11:00:04.810293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.873 [2024-11-19 11:00:04.810302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.873 [2024-11-19 11:00:04.823097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.873 [2024-11-19 11:00:04.823712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.873 [2024-11-19 11:00:04.823738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.873 [2024-11-19 11:00:04.823748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.873 [2024-11-19 11:00:04.823965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.873 [2024-11-19 11:00:04.824193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.873 [2024-11-19 11:00:04.824205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.873 [2024-11-19 11:00:04.824214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.873 [2024-11-19 11:00:04.824223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.873 [2024-11-19 11:00:04.836986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.873 [2024-11-19 11:00:04.837589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.873 [2024-11-19 11:00:04.837614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.873 [2024-11-19 11:00:04.837623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.873 [2024-11-19 11:00:04.837842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.873 [2024-11-19 11:00:04.838061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.873 [2024-11-19 11:00:04.838071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.873 [2024-11-19 11:00:04.838079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.873 [2024-11-19 11:00:04.838089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.873 [2024-11-19 11:00:04.850869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.873 [2024-11-19 11:00:04.851566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.873 [2024-11-19 11:00:04.851632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.873 [2024-11-19 11:00:04.851645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.873 [2024-11-19 11:00:04.851897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.873 [2024-11-19 11:00:04.852122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.873 [2024-11-19 11:00:04.852140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.873 [2024-11-19 11:00:04.852149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.873 [2024-11-19 11:00:04.852175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.874 [2024-11-19 11:00:04.864724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.874 [2024-11-19 11:00:04.865354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.874 [2024-11-19 11:00:04.865420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.874 [2024-11-19 11:00:04.865433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.874 [2024-11-19 11:00:04.865686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.874 [2024-11-19 11:00:04.865910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.874 [2024-11-19 11:00:04.865922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.874 [2024-11-19 11:00:04.865932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.874 [2024-11-19 11:00:04.865941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.874 [2024-11-19 11:00:04.878509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.874 [2024-11-19 11:00:04.879213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.874 [2024-11-19 11:00:04.879280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.874 [2024-11-19 11:00:04.879294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.874 [2024-11-19 11:00:04.879546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.874 [2024-11-19 11:00:04.879771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.874 [2024-11-19 11:00:04.879783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.874 [2024-11-19 11:00:04.879792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.874 [2024-11-19 11:00:04.879801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.874 [2024-11-19 11:00:04.892358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.874 [2024-11-19 11:00:04.893038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.874 [2024-11-19 11:00:04.893104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.874 [2024-11-19 11:00:04.893117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.874 [2024-11-19 11:00:04.893383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.874 [2024-11-19 11:00:04.893610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.874 [2024-11-19 11:00:04.893621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.874 [2024-11-19 11:00:04.893630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.874 [2024-11-19 11:00:04.893646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.874 [2024-11-19 11:00:04.906198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.874 [2024-11-19 11:00:04.906909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.874 [2024-11-19 11:00:04.906977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.874 [2024-11-19 11:00:04.906990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.874 [2024-11-19 11:00:04.907255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.874 [2024-11-19 11:00:04.907481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.874 [2024-11-19 11:00:04.907492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.874 [2024-11-19 11:00:04.907501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.874 [2024-11-19 11:00:04.907511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.874 [2024-11-19 11:00:04.920079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.874 [2024-11-19 11:00:04.920697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.874 [2024-11-19 11:00:04.920765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.874 [2024-11-19 11:00:04.920778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.874 [2024-11-19 11:00:04.921029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.874 [2024-11-19 11:00:04.921268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.874 [2024-11-19 11:00:04.921293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.874 [2024-11-19 11:00:04.921302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.874 [2024-11-19 11:00:04.921311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.874 [2024-11-19 11:00:04.933861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.874 [2024-11-19 11:00:04.934574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.874 [2024-11-19 11:00:04.934639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.874 [2024-11-19 11:00:04.934652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.874 [2024-11-19 11:00:04.934904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.874 [2024-11-19 11:00:04.935128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.874 [2024-11-19 11:00:04.935140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.874 [2024-11-19 11:00:04.935149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.874 [2024-11-19 11:00:04.935172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.874 [2024-11-19 11:00:04.947738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.874 [2024-11-19 11:00:04.948431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.874 [2024-11-19 11:00:04.948503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.874 [2024-11-19 11:00:04.948517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.874 [2024-11-19 11:00:04.948768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.874 [2024-11-19 11:00:04.948993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.874 [2024-11-19 11:00:04.949004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.874 [2024-11-19 11:00:04.949013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.874 [2024-11-19 11:00:04.949022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.874 [2024-11-19 11:00:04.961585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.874 [2024-11-19 11:00:04.962208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.874 [2024-11-19 11:00:04.962240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.874 [2024-11-19 11:00:04.962250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.874 [2024-11-19 11:00:04.962470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.874 [2024-11-19 11:00:04.962690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.874 [2024-11-19 11:00:04.962700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.874 [2024-11-19 11:00:04.962708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.874 [2024-11-19 11:00:04.962717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.874 [2024-11-19 11:00:04.975462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.874 [2024-11-19 11:00:04.976008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.874 [2024-11-19 11:00:04.976074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.874 [2024-11-19 11:00:04.976087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.874 [2024-11-19 11:00:04.976353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.874 [2024-11-19 11:00:04.976579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.874 [2024-11-19 11:00:04.976591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.874 [2024-11-19 11:00:04.976600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.874 [2024-11-19 11:00:04.976610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.874 [2024-11-19 11:00:04.989357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.874 [2024-11-19 11:00:04.990060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.874 [2024-11-19 11:00:04.990126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.874 [2024-11-19 11:00:04.990140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.874 [2024-11-19 11:00:04.990412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.874 [2024-11-19 11:00:04.990639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.874 [2024-11-19 11:00:04.990650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.874 [2024-11-19 11:00:04.990660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.875 [2024-11-19 11:00:04.990669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.875 [2024-11-19 11:00:05.003481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.875 [2024-11-19 11:00:05.004202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.875 [2024-11-19 11:00:05.004269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.875 [2024-11-19 11:00:05.004282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.875 [2024-11-19 11:00:05.004534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.875 [2024-11-19 11:00:05.004759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.875 [2024-11-19 11:00:05.004770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.875 [2024-11-19 11:00:05.004779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.875 [2024-11-19 11:00:05.004789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.875 [2024-11-19 11:00:05.017370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.875 [2024-11-19 11:00:05.018048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.875 [2024-11-19 11:00:05.018114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.875 [2024-11-19 11:00:05.018128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.875 [2024-11-19 11:00:05.018395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.875 [2024-11-19 11:00:05.018620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.875 [2024-11-19 11:00:05.018632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.875 [2024-11-19 11:00:05.018641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.875 [2024-11-19 11:00:05.018651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.875 [2024-11-19 11:00:05.031206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.875 [2024-11-19 11:00:05.031903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.875 [2024-11-19 11:00:05.031969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.875 [2024-11-19 11:00:05.031982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.875 [2024-11-19 11:00:05.032250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.875 [2024-11-19 11:00:05.032477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.875 [2024-11-19 11:00:05.032496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.875 [2024-11-19 11:00:05.032505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.875 [2024-11-19 11:00:05.032515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.875 [2024-11-19 11:00:05.045098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.875 [2024-11-19 11:00:05.045669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.875 [2024-11-19 11:00:05.045737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.875 [2024-11-19 11:00:05.045750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.875 [2024-11-19 11:00:05.046001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.875 [2024-11-19 11:00:05.046238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.875 [2024-11-19 11:00:05.046251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.875 [2024-11-19 11:00:05.046260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.875 [2024-11-19 11:00:05.046270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.875 [2024-11-19 11:00:05.059020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.875 [2024-11-19 11:00:05.059708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.875 [2024-11-19 11:00:05.059774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:25.875 [2024-11-19 11:00:05.059788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:25.875 [2024-11-19 11:00:05.060039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:25.875 [2024-11-19 11:00:05.060279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.875 [2024-11-19 11:00:05.060292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.875 [2024-11-19 11:00:05.060302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.875 [2024-11-19 11:00:05.060311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.137 [2024-11-19 11:00:05.072865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.137 [2024-11-19 11:00:05.073559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.137 [2024-11-19 11:00:05.073625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.137 [2024-11-19 11:00:05.073639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.137 [2024-11-19 11:00:05.073891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.137 [2024-11-19 11:00:05.074115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.137 [2024-11-19 11:00:05.074127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.137 [2024-11-19 11:00:05.074137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.137 [2024-11-19 11:00:05.074154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.137 [2024-11-19 11:00:05.086722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.137 [2024-11-19 11:00:05.087441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.137 [2024-11-19 11:00:05.087507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.137 [2024-11-19 11:00:05.087521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.137 [2024-11-19 11:00:05.087773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.137 [2024-11-19 11:00:05.087998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.137 [2024-11-19 11:00:05.088009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.137 [2024-11-19 11:00:05.088018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.137 [2024-11-19 11:00:05.088028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.137 [2024-11-19 11:00:05.100596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.137 [2024-11-19 11:00:05.101213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.137 [2024-11-19 11:00:05.101246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.137 [2024-11-19 11:00:05.101255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.137 [2024-11-19 11:00:05.101476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.137 [2024-11-19 11:00:05.101696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.137 [2024-11-19 11:00:05.101707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.137 [2024-11-19 11:00:05.101715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.137 [2024-11-19 11:00:05.101725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.137 [2024-11-19 11:00:05.114474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.137 [2024-11-19 11:00:05.115137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.137 [2024-11-19 11:00:05.115213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.137 [2024-11-19 11:00:05.115227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.137 [2024-11-19 11:00:05.115479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.137 [2024-11-19 11:00:05.115703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.137 [2024-11-19 11:00:05.115716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.137 [2024-11-19 11:00:05.115725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.137 [2024-11-19 11:00:05.115735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.138 [2024-11-19 11:00:05.128307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.138 [2024-11-19 11:00:05.128878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.138 [2024-11-19 11:00:05.128950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.138 [2024-11-19 11:00:05.128964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.138 [2024-11-19 11:00:05.129230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.138 [2024-11-19 11:00:05.129468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.138 [2024-11-19 11:00:05.129479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.138 [2024-11-19 11:00:05.129488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.138 [2024-11-19 11:00:05.129498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.138 [2024-11-19 11:00:05.142052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.138 [2024-11-19 11:00:05.142734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.138 [2024-11-19 11:00:05.142800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.138 [2024-11-19 11:00:05.142814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.138 [2024-11-19 11:00:05.143066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.138 [2024-11-19 11:00:05.143320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.138 [2024-11-19 11:00:05.143335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.138 [2024-11-19 11:00:05.143345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.138 [2024-11-19 11:00:05.143355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.138 [2024-11-19 11:00:05.155904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.138 [2024-11-19 11:00:05.156457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.138 [2024-11-19 11:00:05.156523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.138 [2024-11-19 11:00:05.156536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.138 [2024-11-19 11:00:05.156789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.138 [2024-11-19 11:00:05.157013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.138 [2024-11-19 11:00:05.157024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.138 [2024-11-19 11:00:05.157033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.138 [2024-11-19 11:00:05.157043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.138 [2024-11-19 11:00:05.169811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.138 [2024-11-19 11:00:05.170487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.138 [2024-11-19 11:00:05.170554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.138 [2024-11-19 11:00:05.170568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.138 [2024-11-19 11:00:05.170826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.138 [2024-11-19 11:00:05.171051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.138 [2024-11-19 11:00:05.171062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.138 [2024-11-19 11:00:05.171071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.138 [2024-11-19 11:00:05.171080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.138 [2024-11-19 11:00:05.183640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.138 [2024-11-19 11:00:05.184344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.138 [2024-11-19 11:00:05.184410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.138 [2024-11-19 11:00:05.184423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.138 [2024-11-19 11:00:05.184675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.138 [2024-11-19 11:00:05.184900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.138 [2024-11-19 11:00:05.184913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.138 [2024-11-19 11:00:05.184921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.138 [2024-11-19 11:00:05.184931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.138 [2024-11-19 11:00:05.197493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.138 [2024-11-19 11:00:05.198186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.138 [2024-11-19 11:00:05.198253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.138 [2024-11-19 11:00:05.198265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.138 [2024-11-19 11:00:05.198517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.138 [2024-11-19 11:00:05.198742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.138 [2024-11-19 11:00:05.198753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.138 [2024-11-19 11:00:05.198762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.138 [2024-11-19 11:00:05.198772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.138 [2024-11-19 11:00:05.211330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.138 [2024-11-19 11:00:05.211901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.138 [2024-11-19 11:00:05.211967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.138 [2024-11-19 11:00:05.211980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.138 [2024-11-19 11:00:05.212245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.138 [2024-11-19 11:00:05.212472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.138 [2024-11-19 11:00:05.212485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.138 [2024-11-19 11:00:05.212501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.138 [2024-11-19 11:00:05.212511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.138 [2024-11-19 11:00:05.225091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.138 [2024-11-19 11:00:05.225776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.138 [2024-11-19 11:00:05.225841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.138 [2024-11-19 11:00:05.225854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.138 [2024-11-19 11:00:05.226107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.138 [2024-11-19 11:00:05.226349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.138 [2024-11-19 11:00:05.226363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.138 [2024-11-19 11:00:05.226373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.139 [2024-11-19 11:00:05.226382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.139 [2024-11-19 11:00:05.238949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.139 [2024-11-19 11:00:05.239736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.139 [2024-11-19 11:00:05.239803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.139 [2024-11-19 11:00:05.239816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.139 [2024-11-19 11:00:05.240067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.139 [2024-11-19 11:00:05.240308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.139 [2024-11-19 11:00:05.240321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.139 [2024-11-19 11:00:05.240330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.139 [2024-11-19 11:00:05.240340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.139 [2024-11-19 11:00:05.252720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.139 [2024-11-19 11:00:05.253411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.139 [2024-11-19 11:00:05.253477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.139 [2024-11-19 11:00:05.253490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.139 [2024-11-19 11:00:05.253742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.139 [2024-11-19 11:00:05.253967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.139 [2024-11-19 11:00:05.253978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.139 [2024-11-19 11:00:05.253988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.139 [2024-11-19 11:00:05.253998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.139 [2024-11-19 11:00:05.266574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.139 [2024-11-19 11:00:05.267252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.139 [2024-11-19 11:00:05.267319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.139 [2024-11-19 11:00:05.267333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.139 [2024-11-19 11:00:05.267584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.139 [2024-11-19 11:00:05.267808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.139 [2024-11-19 11:00:05.267820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.139 [2024-11-19 11:00:05.267830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.139 [2024-11-19 11:00:05.267839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.139 [2024-11-19 11:00:05.280473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.139 [2024-11-19 11:00:05.280986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.139 [2024-11-19 11:00:05.281016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.139 [2024-11-19 11:00:05.281027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.139 [2024-11-19 11:00:05.281259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.139 [2024-11-19 11:00:05.281480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.139 [2024-11-19 11:00:05.281490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.139 [2024-11-19 11:00:05.281499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.139 [2024-11-19 11:00:05.281507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.139 [2024-11-19 11:00:05.294245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.139 [2024-11-19 11:00:05.294811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.139 [2024-11-19 11:00:05.294836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.139 [2024-11-19 11:00:05.294846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.139 [2024-11-19 11:00:05.295064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.139 [2024-11-19 11:00:05.295293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.139 [2024-11-19 11:00:05.295304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.139 [2024-11-19 11:00:05.295313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.139 [2024-11-19 11:00:05.295321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.139 [2024-11-19 11:00:05.308223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.139 [2024-11-19 11:00:05.308722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.139 [2024-11-19 11:00:05.308762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.139 [2024-11-19 11:00:05.308771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.139 [2024-11-19 11:00:05.308991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.139 [2024-11-19 11:00:05.309219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.139 [2024-11-19 11:00:05.309231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.139 [2024-11-19 11:00:05.309239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.139 [2024-11-19 11:00:05.309248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.139 [2024-11-19 11:00:05.321302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.139 [2024-11-19 11:00:05.321829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.139 [2024-11-19 11:00:05.321851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.139 [2024-11-19 11:00:05.321859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.139 [2024-11-19 11:00:05.322010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.139 [2024-11-19 11:00:05.322173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.139 [2024-11-19 11:00:05.322181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.139 [2024-11-19 11:00:05.322188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.139 [2024-11-19 11:00:05.322194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.401 [2024-11-19 11:00:05.333927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.401 [2024-11-19 11:00:05.334519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.401 [2024-11-19 11:00:05.334575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.401 [2024-11-19 11:00:05.334586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.401 [2024-11-19 11:00:05.334766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.401 [2024-11-19 11:00:05.334923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.402 [2024-11-19 11:00:05.334934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.402 [2024-11-19 11:00:05.334941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.402 [2024-11-19 11:00:05.334949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.402 [2024-11-19 11:00:05.346585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.402 [2024-11-19 11:00:05.347131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.402 [2024-11-19 11:00:05.347155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.402 [2024-11-19 11:00:05.347171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.402 [2024-11-19 11:00:05.347323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.402 [2024-11-19 11:00:05.347481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.402 [2024-11-19 11:00:05.347489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.402 [2024-11-19 11:00:05.347495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.402 [2024-11-19 11:00:05.347501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.402 [2024-11-19 11:00:05.359231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.402 [2024-11-19 11:00:05.359788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.402 [2024-11-19 11:00:05.359836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.402 [2024-11-19 11:00:05.359846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.402 [2024-11-19 11:00:05.360020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.402 [2024-11-19 11:00:05.360190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.402 [2024-11-19 11:00:05.360199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.402 [2024-11-19 11:00:05.360206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.402 [2024-11-19 11:00:05.360213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.402 [2024-11-19 11:00:05.371948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.402 [2024-11-19 11:00:05.372319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.402 [2024-11-19 11:00:05.372341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.402 [2024-11-19 11:00:05.372348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.402 [2024-11-19 11:00:05.372498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.402 [2024-11-19 11:00:05.372649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.402 [2024-11-19 11:00:05.372656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.402 [2024-11-19 11:00:05.372662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.402 [2024-11-19 11:00:05.372668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.402 [2024-11-19 11:00:05.384537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.402 [2024-11-19 11:00:05.385138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.402 [2024-11-19 11:00:05.385188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.402 [2024-11-19 11:00:05.385197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.402 [2024-11-19 11:00:05.385368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.402 [2024-11-19 11:00:05.385521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.402 [2024-11-19 11:00:05.385529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.402 [2024-11-19 11:00:05.385540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.402 [2024-11-19 11:00:05.385547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.402 [2024-11-19 11:00:05.397121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.402 [2024-11-19 11:00:05.397633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.402 [2024-11-19 11:00:05.397652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.402 [2024-11-19 11:00:05.397659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.402 [2024-11-19 11:00:05.397809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.402 [2024-11-19 11:00:05.397959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.402 [2024-11-19 11:00:05.397965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.402 [2024-11-19 11:00:05.397972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.402 [2024-11-19 11:00:05.397977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.402 [2024-11-19 11:00:05.409821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.402 [2024-11-19 11:00:05.410292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.402 [2024-11-19 11:00:05.410308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.402 [2024-11-19 11:00:05.410314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.402 [2024-11-19 11:00:05.410463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.402 [2024-11-19 11:00:05.410613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.402 [2024-11-19 11:00:05.410620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.402 [2024-11-19 11:00:05.410626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.402 [2024-11-19 11:00:05.410631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.402 [2024-11-19 11:00:05.422480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.402 [2024-11-19 11:00:05.422950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.402 [2024-11-19 11:00:05.422965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.402 [2024-11-19 11:00:05.422970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.402 [2024-11-19 11:00:05.423119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.402 [2024-11-19 11:00:05.423274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.402 [2024-11-19 11:00:05.423282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.402 [2024-11-19 11:00:05.423288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.402 [2024-11-19 11:00:05.423293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.402 [2024-11-19 11:00:05.435134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.402 [2024-11-19 11:00:05.435665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.402 [2024-11-19 11:00:05.435700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.402 [2024-11-19 11:00:05.435708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.402 [2024-11-19 11:00:05.435873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.402 [2024-11-19 11:00:05.436025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.402 [2024-11-19 11:00:05.436033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.402 [2024-11-19 11:00:05.436038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.402 [2024-11-19 11:00:05.436044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.402 [2024-11-19 11:00:05.447761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.402 [2024-11-19 11:00:05.448283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.402 [2024-11-19 11:00:05.448316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.402 [2024-11-19 11:00:05.448325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.402 [2024-11-19 11:00:05.448492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.402 [2024-11-19 11:00:05.448644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.402 [2024-11-19 11:00:05.448651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.402 [2024-11-19 11:00:05.448657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.403 [2024-11-19 11:00:05.448664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.403 [2024-11-19 11:00:05.460357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.403 [2024-11-19 11:00:05.460952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.403 [2024-11-19 11:00:05.460984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.403 [2024-11-19 11:00:05.460993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.403 [2024-11-19 11:00:05.461157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.403 [2024-11-19 11:00:05.461317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.403 [2024-11-19 11:00:05.461324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.403 [2024-11-19 11:00:05.461329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.403 [2024-11-19 11:00:05.461335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.403 [2024-11-19 11:00:05.473016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.403 [2024-11-19 11:00:05.473625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.403 [2024-11-19 11:00:05.473658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.403 [2024-11-19 11:00:05.473669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.403 [2024-11-19 11:00:05.473833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.403 [2024-11-19 11:00:05.473985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.403 [2024-11-19 11:00:05.473992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.403 [2024-11-19 11:00:05.473997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.403 [2024-11-19 11:00:05.474004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.403 [2024-11-19 11:00:05.485693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.403 [2024-11-19 11:00:05.486204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.403 [2024-11-19 11:00:05.486226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.403 [2024-11-19 11:00:05.486232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.403 [2024-11-19 11:00:05.486386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.403 [2024-11-19 11:00:05.486536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.403 [2024-11-19 11:00:05.486543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.403 [2024-11-19 11:00:05.486549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.403 [2024-11-19 11:00:05.486554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.403 [2024-11-19 11:00:05.498373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.403 [2024-11-19 11:00:05.498951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.403 [2024-11-19 11:00:05.498983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.403 [2024-11-19 11:00:05.498992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.403 [2024-11-19 11:00:05.499156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.403 [2024-11-19 11:00:05.499315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.403 [2024-11-19 11:00:05.499322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.403 [2024-11-19 11:00:05.499328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.403 [2024-11-19 11:00:05.499334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.403 [2024-11-19 11:00:05.511014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.403 [2024-11-19 11:00:05.511605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.403 [2024-11-19 11:00:05.511637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.403 [2024-11-19 11:00:05.511646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.403 [2024-11-19 11:00:05.511811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.403 [2024-11-19 11:00:05.511966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.403 [2024-11-19 11:00:05.511975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.403 [2024-11-19 11:00:05.511981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.403 [2024-11-19 11:00:05.511988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.403 [2024-11-19 11:00:05.523684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.403 [2024-11-19 11:00:05.524234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.403 [2024-11-19 11:00:05.524267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.403 [2024-11-19 11:00:05.524276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.403 [2024-11-19 11:00:05.524441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.403 [2024-11-19 11:00:05.524592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.403 [2024-11-19 11:00:05.524599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.403 [2024-11-19 11:00:05.524605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.403 [2024-11-19 11:00:05.524610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.403 [2024-11-19 11:00:05.536298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.403 [2024-11-19 11:00:05.536880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.403 [2024-11-19 11:00:05.536912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.403 [2024-11-19 11:00:05.536921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.403 [2024-11-19 11:00:05.537085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.403 [2024-11-19 11:00:05.537245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.403 [2024-11-19 11:00:05.537252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.403 [2024-11-19 11:00:05.537258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.403 [2024-11-19 11:00:05.537264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.403 [2024-11-19 11:00:05.548954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.403 [2024-11-19 11:00:05.549546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.403 [2024-11-19 11:00:05.549578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.403 [2024-11-19 11:00:05.549587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.403 [2024-11-19 11:00:05.549752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.403 [2024-11-19 11:00:05.549904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.403 [2024-11-19 11:00:05.549911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.403 [2024-11-19 11:00:05.549920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.403 [2024-11-19 11:00:05.549926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.403 [2024-11-19 11:00:05.561622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.403 [2024-11-19 11:00:05.562124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.403 [2024-11-19 11:00:05.562140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.403 [2024-11-19 11:00:05.562146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.403 [2024-11-19 11:00:05.562299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.403 [2024-11-19 11:00:05.562449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.403 [2024-11-19 11:00:05.562455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.403 [2024-11-19 11:00:05.562461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.403 [2024-11-19 11:00:05.562466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.403 [2024-11-19 11:00:05.574285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.403 [2024-11-19 11:00:05.574712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.403 [2024-11-19 11:00:05.574744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.403 [2024-11-19 11:00:05.574753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.403 [2024-11-19 11:00:05.574917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.404 [2024-11-19 11:00:05.575069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.404 [2024-11-19 11:00:05.575076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.404 [2024-11-19 11:00:05.575082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.404 [2024-11-19 11:00:05.575088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.404 [2024-11-19 11:00:05.586916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.404 [2024-11-19 11:00:05.587426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.404 [2024-11-19 11:00:05.587442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.404 [2024-11-19 11:00:05.587448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.404 [2024-11-19 11:00:05.587596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.404 [2024-11-19 11:00:05.587745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.404 [2024-11-19 11:00:05.587751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.404 [2024-11-19 11:00:05.587757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.404 [2024-11-19 11:00:05.587762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.665 [2024-11-19 11:00:05.599613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.665 [2024-11-19 11:00:05.600064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.665 [2024-11-19 11:00:05.600078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.665 [2024-11-19 11:00:05.600083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.665 [2024-11-19 11:00:05.600236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.665 [2024-11-19 11:00:05.600386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.665 [2024-11-19 11:00:05.600392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.665 [2024-11-19 11:00:05.600398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.665 [2024-11-19 11:00:05.600403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.665 [2024-11-19 11:00:05.612217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.665 [2024-11-19 11:00:05.612670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.665 [2024-11-19 11:00:05.612702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.665 [2024-11-19 11:00:05.612711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.665 [2024-11-19 11:00:05.612875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.665 [2024-11-19 11:00:05.613027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.665 [2024-11-19 11:00:05.613035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.665 [2024-11-19 11:00:05.613041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.665 [2024-11-19 11:00:05.613047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.665 [2024-11-19 11:00:05.624889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.665 [2024-11-19 11:00:05.625463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.665 [2024-11-19 11:00:05.625495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.665 [2024-11-19 11:00:05.625504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.665 [2024-11-19 11:00:05.625668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.665 [2024-11-19 11:00:05.625819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.665 [2024-11-19 11:00:05.625826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.665 [2024-11-19 11:00:05.625832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.666 [2024-11-19 11:00:05.625838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.666 [2024-11-19 11:00:05.637535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.666 [2024-11-19 11:00:05.638031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.666 [2024-11-19 11:00:05.638046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.666 [2024-11-19 11:00:05.638056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.666 [2024-11-19 11:00:05.638211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.666 [2024-11-19 11:00:05.638362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.666 [2024-11-19 11:00:05.638368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.666 [2024-11-19 11:00:05.638374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.666 [2024-11-19 11:00:05.638380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.666 7321.50 IOPS, 28.60 MiB/s [2024-11-19T10:00:05.861Z] [2024-11-19 11:00:05.650221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.666 [2024-11-19 11:00:05.650797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.666 [2024-11-19 11:00:05.650829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.666 [2024-11-19 11:00:05.650839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.666 [2024-11-19 11:00:05.651003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.666 [2024-11-19 11:00:05.651154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.666 [2024-11-19 11:00:05.651167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.666 [2024-11-19 11:00:05.651173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.666 [2024-11-19 11:00:05.651179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.666 [2024-11-19 11:00:05.662803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.666 [2024-11-19 11:00:05.663365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.666 [2024-11-19 11:00:05.663398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.666 [2024-11-19 11:00:05.663407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.666 [2024-11-19 11:00:05.663571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.666 [2024-11-19 11:00:05.663724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.666 [2024-11-19 11:00:05.663730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.666 [2024-11-19 11:00:05.663737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.666 [2024-11-19 11:00:05.663743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
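[annotation] The interleaved "7321.50 IOPS, 28.60 MiB/s" line above is bdevperf's periodic throughput sample, not part of the error cycle. The two figures are mutually consistent with a 4 KiB I/O size: 7321.50 IOPS x 4096 B = 29,988,864 B/s, and 29,988,864 / 1,048,576 ~= 28.60 MiB/s. The I/O size itself is not printed in this excerpt, so 4 KiB is inferred from the arithmetic, not stated by the tool.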
00:32:26.666 [2024-11-19 11:00:05.675433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.666 [2024-11-19 11:00:05.676027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.666 [2024-11-19 11:00:05.676059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.666 [2024-11-19 11:00:05.676069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.666 [2024-11-19 11:00:05.676240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.666 [2024-11-19 11:00:05.676396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.666 [2024-11-19 11:00:05.676404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.666 [2024-11-19 11:00:05.676411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.666 [2024-11-19 11:00:05.676418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.666 [2024-11-19 11:00:05.688101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.666 [2024-11-19 11:00:05.688660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.666 [2024-11-19 11:00:05.688692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.666 [2024-11-19 11:00:05.688701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.666 [2024-11-19 11:00:05.688865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.666 [2024-11-19 11:00:05.689016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.666 [2024-11-19 11:00:05.689023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.666 [2024-11-19 11:00:05.689028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.666 [2024-11-19 11:00:05.689035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.666 [2024-11-19 11:00:05.700725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.666 [2024-11-19 11:00:05.701196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.666 [2024-11-19 11:00:05.701218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.666 [2024-11-19 11:00:05.701225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.666 [2024-11-19 11:00:05.701378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.666 [2024-11-19 11:00:05.701528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.666 [2024-11-19 11:00:05.701535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.666 [2024-11-19 11:00:05.701540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.666 [2024-11-19 11:00:05.701546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.666 [2024-11-19 11:00:05.713386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.666 [2024-11-19 11:00:05.713821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.666 [2024-11-19 11:00:05.713835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.666 [2024-11-19 11:00:05.713841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.666 [2024-11-19 11:00:05.713989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.666 [2024-11-19 11:00:05.714138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.666 [2024-11-19 11:00:05.714145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.666 [2024-11-19 11:00:05.714154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.666 [2024-11-19 11:00:05.714163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.666 [2024-11-19 11:00:05.725997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.666 [2024-11-19 11:00:05.726545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.666 [2024-11-19 11:00:05.726577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.666 [2024-11-19 11:00:05.726586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.666 [2024-11-19 11:00:05.726750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.666 [2024-11-19 11:00:05.726901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.666 [2024-11-19 11:00:05.726909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.666 [2024-11-19 11:00:05.726915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.666 [2024-11-19 11:00:05.726921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.666 [2024-11-19 11:00:05.738619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.666 [2024-11-19 11:00:05.739216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.666 [2024-11-19 11:00:05.739248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.666 [2024-11-19 11:00:05.739257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.666 [2024-11-19 11:00:05.739421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.666 [2024-11-19 11:00:05.739572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.666 [2024-11-19 11:00:05.739580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.666 [2024-11-19 11:00:05.739586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.666 [2024-11-19 11:00:05.739592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.666 [2024-11-19 11:00:05.751293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.666 [2024-11-19 11:00:05.751838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.666 [2024-11-19 11:00:05.751871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.666 [2024-11-19 11:00:05.751880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.666 [2024-11-19 11:00:05.752044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.667 [2024-11-19 11:00:05.752202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.667 [2024-11-19 11:00:05.752209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.667 [2024-11-19 11:00:05.752216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.667 [2024-11-19 11:00:05.752222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.667 [2024-11-19 11:00:05.763908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.667 [2024-11-19 11:00:05.764376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.667 [2024-11-19 11:00:05.764392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.667 [2024-11-19 11:00:05.764399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.667 [2024-11-19 11:00:05.764547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.667 [2024-11-19 11:00:05.764696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.667 [2024-11-19 11:00:05.764702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.667 [2024-11-19 11:00:05.764707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.667 [2024-11-19 11:00:05.764713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.667 [2024-11-19 11:00:05.776535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.667 [2024-11-19 11:00:05.776998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.667 [2024-11-19 11:00:05.777011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.667 [2024-11-19 11:00:05.777017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.667 [2024-11-19 11:00:05.777170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.667 [2024-11-19 11:00:05.777320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.667 [2024-11-19 11:00:05.777328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.667 [2024-11-19 11:00:05.777334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.667 [2024-11-19 11:00:05.777339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:26.667 [2024-11-19 11:00:05.789157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:26.667 [2024-11-19 11:00:05.789654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.667 [2024-11-19 11:00:05.789667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:26.667 [2024-11-19 11:00:05.789672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:26.667 [2024-11-19 11:00:05.789821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:26.667 [2024-11-19 11:00:05.789970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:26.667 [2024-11-19 11:00:05.789977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:26.667 [2024-11-19 11:00:05.789983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:26.667 [2024-11-19 11:00:05.789987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:26.667 [2024-11-19 11:00:05.801810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.667 [2024-11-19 11:00:05.802503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.667 [2024-11-19 11:00:05.802535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.667 [2024-11-19 11:00:05.802547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.667 [2024-11-19 11:00:05.802711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.667 [2024-11-19 11:00:05.802862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.667 [2024-11-19 11:00:05.802869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.667 [2024-11-19 11:00:05.802875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.667 [2024-11-19 11:00:05.802881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.667 [2024-11-19 11:00:05.814433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.667 [2024-11-19 11:00:05.814896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.667 [2024-11-19 11:00:05.814912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.667 [2024-11-19 11:00:05.814918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.667 [2024-11-19 11:00:05.815067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.667 [2024-11-19 11:00:05.815220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.667 [2024-11-19 11:00:05.815227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.667 [2024-11-19 11:00:05.815233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.667 [2024-11-19 11:00:05.815239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.667 [2024-11-19 11:00:05.827068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.667 [2024-11-19 11:00:05.827766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.667 [2024-11-19 11:00:05.827799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.667 [2024-11-19 11:00:05.827808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.667 [2024-11-19 11:00:05.827973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.667 [2024-11-19 11:00:05.828124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.667 [2024-11-19 11:00:05.828131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.667 [2024-11-19 11:00:05.828137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.667 [2024-11-19 11:00:05.828143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.667 [2024-11-19 11:00:05.839701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.667 [2024-11-19 11:00:05.840234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.667 [2024-11-19 11:00:05.840267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.667 [2024-11-19 11:00:05.840277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.667 [2024-11-19 11:00:05.840444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.667 [2024-11-19 11:00:05.840600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.667 [2024-11-19 11:00:05.840607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.667 [2024-11-19 11:00:05.840614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.667 [2024-11-19 11:00:05.840621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.667 [2024-11-19 11:00:05.852321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.667 [2024-11-19 11:00:05.852766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.667 [2024-11-19 11:00:05.852797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.667 [2024-11-19 11:00:05.852806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.667 [2024-11-19 11:00:05.852970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.667 [2024-11-19 11:00:05.853122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.667 [2024-11-19 11:00:05.853129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.667 [2024-11-19 11:00:05.853135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.667 [2024-11-19 11:00:05.853141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.930 [2024-11-19 11:00:05.864969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.930 [2024-11-19 11:00:05.865436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.930 [2024-11-19 11:00:05.865451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.930 [2024-11-19 11:00:05.865457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.930 [2024-11-19 11:00:05.865607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.930 [2024-11-19 11:00:05.865755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.930 [2024-11-19 11:00:05.865763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.930 [2024-11-19 11:00:05.865768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.930 [2024-11-19 11:00:05.865773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.930 [2024-11-19 11:00:05.877587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.930 [2024-11-19 11:00:05.878066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.930 [2024-11-19 11:00:05.878078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.930 [2024-11-19 11:00:05.878084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.930 [2024-11-19 11:00:05.878237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.930 [2024-11-19 11:00:05.878387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.930 [2024-11-19 11:00:05.878393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.930 [2024-11-19 11:00:05.878404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.930 [2024-11-19 11:00:05.878410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.930 [2024-11-19 11:00:05.890232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.930 [2024-11-19 11:00:05.890817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.930 [2024-11-19 11:00:05.890849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.930 [2024-11-19 11:00:05.890858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.930 [2024-11-19 11:00:05.891022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.930 [2024-11-19 11:00:05.891179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.930 [2024-11-19 11:00:05.891187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.930 [2024-11-19 11:00:05.891193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.930 [2024-11-19 11:00:05.891199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.930 [2024-11-19 11:00:05.902886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.930 [2024-11-19 11:00:05.903359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.930 [2024-11-19 11:00:05.903375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.930 [2024-11-19 11:00:05.903381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.930 [2024-11-19 11:00:05.903530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.930 [2024-11-19 11:00:05.903679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.930 [2024-11-19 11:00:05.903686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.930 [2024-11-19 11:00:05.903692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.930 [2024-11-19 11:00:05.903697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.930 [2024-11-19 11:00:05.915520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.930 [2024-11-19 11:00:05.916009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.930 [2024-11-19 11:00:05.916023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.930 [2024-11-19 11:00:05.916030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.930 [2024-11-19 11:00:05.916182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.930 [2024-11-19 11:00:05.916332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.930 [2024-11-19 11:00:05.916338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.930 [2024-11-19 11:00:05.916344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.930 [2024-11-19 11:00:05.916348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.930 [2024-11-19 11:00:05.928174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.930 [2024-11-19 11:00:05.928663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.930 [2024-11-19 11:00:05.928676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.930 [2024-11-19 11:00:05.928681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.930 [2024-11-19 11:00:05.928830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.930 [2024-11-19 11:00:05.928978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.930 [2024-11-19 11:00:05.928985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.930 [2024-11-19 11:00:05.928991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.930 [2024-11-19 11:00:05.928995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.930 [2024-11-19 11:00:05.940819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.930 [2024-11-19 11:00:05.941285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.930 [2024-11-19 11:00:05.941318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.930 [2024-11-19 11:00:05.941327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.930 [2024-11-19 11:00:05.941494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.930 [2024-11-19 11:00:05.941645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.930 [2024-11-19 11:00:05.941653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.930 [2024-11-19 11:00:05.941659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.930 [2024-11-19 11:00:05.941665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.930 [2024-11-19 11:00:05.953506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.930 [2024-11-19 11:00:05.953963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.930 [2024-11-19 11:00:05.953978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.930 [2024-11-19 11:00:05.953984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.930 [2024-11-19 11:00:05.954133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.930 [2024-11-19 11:00:05.954287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.930 [2024-11-19 11:00:05.954294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.930 [2024-11-19 11:00:05.954299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.930 [2024-11-19 11:00:05.954304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.930 [2024-11-19 11:00:05.966118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.931 [2024-11-19 11:00:05.966585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.931 [2024-11-19 11:00:05.966599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.931 [2024-11-19 11:00:05.966608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.931 [2024-11-19 11:00:05.966756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.931 [2024-11-19 11:00:05.966906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.931 [2024-11-19 11:00:05.966912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.931 [2024-11-19 11:00:05.966917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.931 [2024-11-19 11:00:05.966922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.931 [2024-11-19 11:00:05.978780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.931 [2024-11-19 11:00:05.979207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.931 [2024-11-19 11:00:05.979228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.931 [2024-11-19 11:00:05.979234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.931 [2024-11-19 11:00:05.979387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.931 [2024-11-19 11:00:05.979536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.931 [2024-11-19 11:00:05.979542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.931 [2024-11-19 11:00:05.979549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.931 [2024-11-19 11:00:05.979554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.931 [2024-11-19 11:00:05.991381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.931 [2024-11-19 11:00:05.991860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.931 [2024-11-19 11:00:05.991892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.931 [2024-11-19 11:00:05.991901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.931 [2024-11-19 11:00:05.992067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.931 [2024-11-19 11:00:05.992225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.931 [2024-11-19 11:00:05.992232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.931 [2024-11-19 11:00:05.992238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.931 [2024-11-19 11:00:05.992244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.931 [2024-11-19 11:00:06.003946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.931 [2024-11-19 11:00:06.004439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.931 [2024-11-19 11:00:06.004456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.931 [2024-11-19 11:00:06.004463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.931 [2024-11-19 11:00:06.004612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.931 [2024-11-19 11:00:06.004765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.931 [2024-11-19 11:00:06.004772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.931 [2024-11-19 11:00:06.004777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.931 [2024-11-19 11:00:06.004782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.931 [2024-11-19 11:00:06.016604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.931 [2024-11-19 11:00:06.017092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.931 [2024-11-19 11:00:06.017105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.931 [2024-11-19 11:00:06.017111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.931 [2024-11-19 11:00:06.017270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.931 [2024-11-19 11:00:06.017421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.931 [2024-11-19 11:00:06.017428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.931 [2024-11-19 11:00:06.017434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.931 [2024-11-19 11:00:06.017439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.931 [2024-11-19 11:00:06.029259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.931 [2024-11-19 11:00:06.029738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.931 [2024-11-19 11:00:06.029770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.931 [2024-11-19 11:00:06.029779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.931 [2024-11-19 11:00:06.029943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.931 [2024-11-19 11:00:06.030094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.931 [2024-11-19 11:00:06.030101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.931 [2024-11-19 11:00:06.030107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.931 [2024-11-19 11:00:06.030113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.931 [2024-11-19 11:00:06.041947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.931 [2024-11-19 11:00:06.042444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.931 [2024-11-19 11:00:06.042460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.931 [2024-11-19 11:00:06.042467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.931 [2024-11-19 11:00:06.042616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.931 [2024-11-19 11:00:06.042766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.931 [2024-11-19 11:00:06.042773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.931 [2024-11-19 11:00:06.042779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.931 [2024-11-19 11:00:06.042788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.931 [2024-11-19 11:00:06.054617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.931 [2024-11-19 11:00:06.055102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.931 [2024-11-19 11:00:06.055116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.931 [2024-11-19 11:00:06.055122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.931 [2024-11-19 11:00:06.055276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.931 [2024-11-19 11:00:06.055426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.931 [2024-11-19 11:00:06.055432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.931 [2024-11-19 11:00:06.055438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.931 [2024-11-19 11:00:06.055443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.931 [2024-11-19 11:00:06.067260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.931 [2024-11-19 11:00:06.067841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.931 [2024-11-19 11:00:06.067873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.931 [2024-11-19 11:00:06.067882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.931 [2024-11-19 11:00:06.068046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.931 [2024-11-19 11:00:06.068204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.931 [2024-11-19 11:00:06.068212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.931 [2024-11-19 11:00:06.068217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.931 [2024-11-19 11:00:06.068224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.931 [2024-11-19 11:00:06.079909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.931 [2024-11-19 11:00:06.080357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.931 [2024-11-19 11:00:06.080373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.931 [2024-11-19 11:00:06.080379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.931 [2024-11-19 11:00:06.080528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.931 [2024-11-19 11:00:06.080678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.931 [2024-11-19 11:00:06.080684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.931 [2024-11-19 11:00:06.080690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.931 [2024-11-19 11:00:06.080695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.932 [2024-11-19 11:00:06.092518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.932 [2024-11-19 11:00:06.093013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.932 [2024-11-19 11:00:06.093027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.932 [2024-11-19 11:00:06.093032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.932 [2024-11-19 11:00:06.093186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.932 [2024-11-19 11:00:06.093336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.932 [2024-11-19 11:00:06.093342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.932 [2024-11-19 11:00:06.093348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.932 [2024-11-19 11:00:06.093352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.932 [2024-11-19 11:00:06.105171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.932 [2024-11-19 11:00:06.105661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.932 [2024-11-19 11:00:06.105675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.932 [2024-11-19 11:00:06.105681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.932 [2024-11-19 11:00:06.105829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.932 [2024-11-19 11:00:06.105978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.932 [2024-11-19 11:00:06.105984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.932 [2024-11-19 11:00:06.105990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.932 [2024-11-19 11:00:06.105995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:26.932 [2024-11-19 11:00:06.117822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:26.932 [2024-11-19 11:00:06.118199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:26.932 [2024-11-19 11:00:06.118220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:26.932 [2024-11-19 11:00:06.118226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:26.932 [2024-11-19 11:00:06.118379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:26.932 [2024-11-19 11:00:06.118528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:26.932 [2024-11-19 11:00:06.118535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:26.932 [2024-11-19 11:00:06.118540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:26.932 [2024-11-19 11:00:06.118546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.194 [2024-11-19 11:00:06.130517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.194 [2024-11-19 11:00:06.131005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.195 [2024-11-19 11:00:06.131019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.195 [2024-11-19 11:00:06.131024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.195 [2024-11-19 11:00:06.131181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.195 [2024-11-19 11:00:06.131330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.195 [2024-11-19 11:00:06.131337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.195 [2024-11-19 11:00:06.131342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.195 [2024-11-19 11:00:06.131347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.195 [2024-11-19 11:00:06.143162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.195 [2024-11-19 11:00:06.143601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.195 [2024-11-19 11:00:06.143633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.195 [2024-11-19 11:00:06.143641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.195 [2024-11-19 11:00:06.143805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.195 [2024-11-19 11:00:06.143957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.195 [2024-11-19 11:00:06.143965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.195 [2024-11-19 11:00:06.143970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.195 [2024-11-19 11:00:06.143976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.195 [2024-11-19 11:00:06.155819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.195 [2024-11-19 11:00:06.156303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.195 [2024-11-19 11:00:06.156335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.195 [2024-11-19 11:00:06.156344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.195 [2024-11-19 11:00:06.156510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.195 [2024-11-19 11:00:06.156662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.195 [2024-11-19 11:00:06.156669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.195 [2024-11-19 11:00:06.156674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.195 [2024-11-19 11:00:06.156680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.195 [2024-11-19 11:00:06.168511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.195 [2024-11-19 11:00:06.169038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.195 [2024-11-19 11:00:06.169070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.195 [2024-11-19 11:00:06.169079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.195 [2024-11-19 11:00:06.169248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.195 [2024-11-19 11:00:06.169407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.195 [2024-11-19 11:00:06.169418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.195 [2024-11-19 11:00:06.169424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.195 [2024-11-19 11:00:06.169430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.195 [2024-11-19 11:00:06.181113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.195 [2024-11-19 11:00:06.181596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.195 [2024-11-19 11:00:06.181628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.195 [2024-11-19 11:00:06.181638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.195 [2024-11-19 11:00:06.181803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.195 [2024-11-19 11:00:06.181955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.195 [2024-11-19 11:00:06.181962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.195 [2024-11-19 11:00:06.181968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.195 [2024-11-19 11:00:06.181975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.195 [2024-11-19 11:00:06.193805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.195 [2024-11-19 11:00:06.194184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.195 [2024-11-19 11:00:06.194201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.195 [2024-11-19 11:00:06.194207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.195 [2024-11-19 11:00:06.194356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.195 [2024-11-19 11:00:06.194505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.195 [2024-11-19 11:00:06.194511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.195 [2024-11-19 11:00:06.194517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.195 [2024-11-19 11:00:06.194523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.195 [2024-11-19 11:00:06.206483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.195 [2024-11-19 11:00:06.206850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.195 [2024-11-19 11:00:06.206864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.195 [2024-11-19 11:00:06.206870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.195 [2024-11-19 11:00:06.207017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.195 [2024-11-19 11:00:06.207171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.195 [2024-11-19 11:00:06.207177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.195 [2024-11-19 11:00:06.207183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.195 [2024-11-19 11:00:06.207192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.195 [2024-11-19 11:00:06.219153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.195 [2024-11-19 11:00:06.219514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.195 [2024-11-19 11:00:06.219528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.195 [2024-11-19 11:00:06.219534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.195 [2024-11-19 11:00:06.219682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.195 [2024-11-19 11:00:06.219831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.195 [2024-11-19 11:00:06.219838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.195 [2024-11-19 11:00:06.219843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.195 [2024-11-19 11:00:06.219848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.195 [2024-11-19 11:00:06.231801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.195 [2024-11-19 11:00:06.232189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.195 [2024-11-19 11:00:06.232203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.195 [2024-11-19 11:00:06.232209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.195 [2024-11-19 11:00:06.232357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.195 [2024-11-19 11:00:06.232506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.195 [2024-11-19 11:00:06.232512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.195 [2024-11-19 11:00:06.232518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.195 [2024-11-19 11:00:06.232523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.195 [2024-11-19 11:00:06.244388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.195 [2024-11-19 11:00:06.244982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.195 [2024-11-19 11:00:06.245014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.195 [2024-11-19 11:00:06.245023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.195 [2024-11-19 11:00:06.245194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.195 [2024-11-19 11:00:06.245347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.196 [2024-11-19 11:00:06.245354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.196 [2024-11-19 11:00:06.245360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.196 [2024-11-19 11:00:06.245366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.196 [2024-11-19 11:00:06.257058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.196 [2024-11-19 11:00:06.257518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.196 [2024-11-19 11:00:06.257533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.196 [2024-11-19 11:00:06.257539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.196 [2024-11-19 11:00:06.257688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.196 [2024-11-19 11:00:06.257837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.196 [2024-11-19 11:00:06.257844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.196 [2024-11-19 11:00:06.257849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.196 [2024-11-19 11:00:06.257854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.196 [2024-11-19 11:00:06.269673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.196 [2024-11-19 11:00:06.270122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.196 [2024-11-19 11:00:06.270135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.196 [2024-11-19 11:00:06.270141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.196 [2024-11-19 11:00:06.270293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.196 [2024-11-19 11:00:06.270443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.196 [2024-11-19 11:00:06.270450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.196 [2024-11-19 11:00:06.270455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.196 [2024-11-19 11:00:06.270460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.196 [2024-11-19 11:00:06.282275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.196 [2024-11-19 11:00:06.282763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.196 [2024-11-19 11:00:06.282776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.196 [2024-11-19 11:00:06.282781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.196 [2024-11-19 11:00:06.282929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.196 [2024-11-19 11:00:06.283078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.196 [2024-11-19 11:00:06.283085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.196 [2024-11-19 11:00:06.283090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.196 [2024-11-19 11:00:06.283095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.196 [2024-11-19 11:00:06.294916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.196 [2024-11-19 11:00:06.295353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.196 [2024-11-19 11:00:06.295385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.196 [2024-11-19 11:00:06.295394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.196 [2024-11-19 11:00:06.295564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.196 [2024-11-19 11:00:06.295716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.196 [2024-11-19 11:00:06.295723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.196 [2024-11-19 11:00:06.295728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.196 [2024-11-19 11:00:06.295735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.196 [2024-11-19 11:00:06.307568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.196 [2024-11-19 11:00:06.307929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.196 [2024-11-19 11:00:06.307945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.196 [2024-11-19 11:00:06.307951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.196 [2024-11-19 11:00:06.308100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.196 [2024-11-19 11:00:06.308254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.196 [2024-11-19 11:00:06.308262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.196 [2024-11-19 11:00:06.308267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.196 [2024-11-19 11:00:06.308272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.196 [2024-11-19 11:00:06.320246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.196 [2024-11-19 11:00:06.320731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.196 [2024-11-19 11:00:06.320744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420
00:32:27.196 [2024-11-19 11:00:06.320750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set
00:32:27.196 [2024-11-19 11:00:06.320898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor
00:32:27.196 [2024-11-19 11:00:06.321047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.196 [2024-11-19 11:00:06.321053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.196 [2024-11-19 11:00:06.321059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.196 [2024-11-19 11:00:06.321064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.196 [2024-11-19 11:00:06.332884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.196 [2024-11-19 11:00:06.333343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.196 [2024-11-19 11:00:06.333357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.196 [2024-11-19 11:00:06.333363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.196 [2024-11-19 11:00:06.333511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.196 [2024-11-19 11:00:06.333660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.196 [2024-11-19 11:00:06.333670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.196 [2024-11-19 11:00:06.333676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.196 [2024-11-19 11:00:06.333680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.196 [2024-11-19 11:00:06.345569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.196 [2024-11-19 11:00:06.346018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.196 [2024-11-19 11:00:06.346032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.196 [2024-11-19 11:00:06.346038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.196 [2024-11-19 11:00:06.346190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.196 [2024-11-19 11:00:06.346339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.196 [2024-11-19 11:00:06.346346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.196 [2024-11-19 11:00:06.346352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.196 [2024-11-19 11:00:06.346357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.196 [2024-11-19 11:00:06.358188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.196 [2024-11-19 11:00:06.358636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.196 [2024-11-19 11:00:06.358650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.196 [2024-11-19 11:00:06.358656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.196 [2024-11-19 11:00:06.358804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.196 [2024-11-19 11:00:06.358953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.196 [2024-11-19 11:00:06.358959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.196 [2024-11-19 11:00:06.358964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.196 [2024-11-19 11:00:06.358970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.196 [2024-11-19 11:00:06.370789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.196 [2024-11-19 11:00:06.371294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.196 [2024-11-19 11:00:06.371308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.196 [2024-11-19 11:00:06.371314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.196 [2024-11-19 11:00:06.371462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.197 [2024-11-19 11:00:06.371611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.197 [2024-11-19 11:00:06.371617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.197 [2024-11-19 11:00:06.371623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.197 [2024-11-19 11:00:06.371632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.197 [2024-11-19 11:00:06.383445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.197 [2024-11-19 11:00:06.384032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.197 [2024-11-19 11:00:06.384064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.197 [2024-11-19 11:00:06.384073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.197 [2024-11-19 11:00:06.384242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.197 [2024-11-19 11:00:06.384395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.197 [2024-11-19 11:00:06.384402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.197 [2024-11-19 11:00:06.384408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.197 [2024-11-19 11:00:06.384414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.459 [2024-11-19 11:00:06.396094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.459 [2024-11-19 11:00:06.396712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.459 [2024-11-19 11:00:06.396744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.459 [2024-11-19 11:00:06.396753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.459 [2024-11-19 11:00:06.396917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.459 [2024-11-19 11:00:06.397069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.459 [2024-11-19 11:00:06.397076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.459 [2024-11-19 11:00:06.397082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.459 [2024-11-19 11:00:06.397088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.459 [2024-11-19 11:00:06.408776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.459 [2024-11-19 11:00:06.409298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.459 [2024-11-19 11:00:06.409330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.459 [2024-11-19 11:00:06.409339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.459 [2024-11-19 11:00:06.409505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.459 [2024-11-19 11:00:06.409657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.459 [2024-11-19 11:00:06.409664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.459 [2024-11-19 11:00:06.409670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.459 [2024-11-19 11:00:06.409676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.459 [2024-11-19 11:00:06.421374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.459 [2024-11-19 11:00:06.421946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.459 [2024-11-19 11:00:06.421982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.459 [2024-11-19 11:00:06.421991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.459 [2024-11-19 11:00:06.422155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.459 [2024-11-19 11:00:06.422314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.459 [2024-11-19 11:00:06.422322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.459 [2024-11-19 11:00:06.422328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.459 [2024-11-19 11:00:06.422334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.459 [2024-11-19 11:00:06.434010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.459 [2024-11-19 11:00:06.434608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.459 [2024-11-19 11:00:06.434641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.459 [2024-11-19 11:00:06.434649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.459 [2024-11-19 11:00:06.434814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.459 [2024-11-19 11:00:06.434965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.459 [2024-11-19 11:00:06.434972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.459 [2024-11-19 11:00:06.434978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.459 [2024-11-19 11:00:06.434984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.459 [2024-11-19 11:00:06.446671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.459 [2024-11-19 11:00:06.447249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.459 [2024-11-19 11:00:06.447281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.459 [2024-11-19 11:00:06.447290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.459 [2024-11-19 11:00:06.447455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.459 [2024-11-19 11:00:06.447615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.459 [2024-11-19 11:00:06.447623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.459 [2024-11-19 11:00:06.447628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.459 [2024-11-19 11:00:06.447635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.459 [2024-11-19 11:00:06.459324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.459 [2024-11-19 11:00:06.459793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.459 [2024-11-19 11:00:06.459825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.459 [2024-11-19 11:00:06.459834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.459 [2024-11-19 11:00:06.460001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.459 [2024-11-19 11:00:06.460153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.459 [2024-11-19 11:00:06.460168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.459 [2024-11-19 11:00:06.460174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.459 [2024-11-19 11:00:06.460180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.459 [2024-11-19 11:00:06.471998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.459 [2024-11-19 11:00:06.472548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.459 [2024-11-19 11:00:06.472580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.459 [2024-11-19 11:00:06.472588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.460 [2024-11-19 11:00:06.472752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.460 [2024-11-19 11:00:06.472903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.460 [2024-11-19 11:00:06.472910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.460 [2024-11-19 11:00:06.472916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.460 [2024-11-19 11:00:06.472922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.460 [2024-11-19 11:00:06.484612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.460 [2024-11-19 11:00:06.485116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.460 [2024-11-19 11:00:06.485132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.460 [2024-11-19 11:00:06.485138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.460 [2024-11-19 11:00:06.485292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.460 [2024-11-19 11:00:06.485441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.460 [2024-11-19 11:00:06.485448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.460 [2024-11-19 11:00:06.485453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.460 [2024-11-19 11:00:06.485458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.460 [2024-11-19 11:00:06.497267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.460 [2024-11-19 11:00:06.497744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.460 [2024-11-19 11:00:06.497758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.460 [2024-11-19 11:00:06.497763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.460 [2024-11-19 11:00:06.497912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.460 [2024-11-19 11:00:06.498061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.460 [2024-11-19 11:00:06.498071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.460 [2024-11-19 11:00:06.498076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.460 [2024-11-19 11:00:06.498081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.460 [2024-11-19 11:00:06.509899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.460 [2024-11-19 11:00:06.510449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.460 [2024-11-19 11:00:06.510482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.460 [2024-11-19 11:00:06.510490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.460 [2024-11-19 11:00:06.510654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.460 [2024-11-19 11:00:06.510806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.460 [2024-11-19 11:00:06.510813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.460 [2024-11-19 11:00:06.510818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.460 [2024-11-19 11:00:06.510824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.460 [2024-11-19 11:00:06.522515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.460 [2024-11-19 11:00:06.523107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.460 [2024-11-19 11:00:06.523139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.460 [2024-11-19 11:00:06.523148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.460 [2024-11-19 11:00:06.523319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.460 [2024-11-19 11:00:06.523471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.460 [2024-11-19 11:00:06.523479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.460 [2024-11-19 11:00:06.523484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.460 [2024-11-19 11:00:06.523490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.460 [2024-11-19 11:00:06.535168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.460 [2024-11-19 11:00:06.535741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.460 [2024-11-19 11:00:06.535773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.460 [2024-11-19 11:00:06.535782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.460 [2024-11-19 11:00:06.535946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.460 [2024-11-19 11:00:06.536098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.460 [2024-11-19 11:00:06.536104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.460 [2024-11-19 11:00:06.536110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.460 [2024-11-19 11:00:06.536116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.460 [2024-11-19 11:00:06.547809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.460 [2024-11-19 11:00:06.548398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.460 [2024-11-19 11:00:06.548431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.460 [2024-11-19 11:00:06.548440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.460 [2024-11-19 11:00:06.548604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.460 [2024-11-19 11:00:06.548757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.460 [2024-11-19 11:00:06.548764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.460 [2024-11-19 11:00:06.548769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.460 [2024-11-19 11:00:06.548775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.460 [2024-11-19 11:00:06.560467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.460 [2024-11-19 11:00:06.560942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.460 [2024-11-19 11:00:06.560974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.460 [2024-11-19 11:00:06.560983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.460 [2024-11-19 11:00:06.561147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.460 [2024-11-19 11:00:06.561306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.460 [2024-11-19 11:00:06.561314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.460 [2024-11-19 11:00:06.561320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.460 [2024-11-19 11:00:06.561326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.460 [2024-11-19 11:00:06.573156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.460 [2024-11-19 11:00:06.573756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.460 [2024-11-19 11:00:06.573788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.460 [2024-11-19 11:00:06.573798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.460 [2024-11-19 11:00:06.573965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.460 [2024-11-19 11:00:06.574116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.460 [2024-11-19 11:00:06.574124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.460 [2024-11-19 11:00:06.574129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.460 [2024-11-19 11:00:06.574135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.460 [2024-11-19 11:00:06.585823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.460 [2024-11-19 11:00:06.586295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.460 [2024-11-19 11:00:06.586316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.460 [2024-11-19 11:00:06.586322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.460 [2024-11-19 11:00:06.586471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.460 [2024-11-19 11:00:06.586620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.460 [2024-11-19 11:00:06.586627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.460 [2024-11-19 11:00:06.586633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.460 [2024-11-19 11:00:06.586638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.460 [2024-11-19 11:00:06.598457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.460 [2024-11-19 11:00:06.598829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.460 [2024-11-19 11:00:06.598842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.461 [2024-11-19 11:00:06.598848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.461 [2024-11-19 11:00:06.598996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.461 [2024-11-19 11:00:06.599145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.461 [2024-11-19 11:00:06.599152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.461 [2024-11-19 11:00:06.599163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.461 [2024-11-19 11:00:06.599168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.461 [2024-11-19 11:00:06.611121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.461 [2024-11-19 11:00:06.611713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.461 [2024-11-19 11:00:06.611745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.461 [2024-11-19 11:00:06.611754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.461 [2024-11-19 11:00:06.611918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.461 [2024-11-19 11:00:06.612070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.461 [2024-11-19 11:00:06.612077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.461 [2024-11-19 11:00:06.612083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.461 [2024-11-19 11:00:06.612089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.461 [2024-11-19 11:00:06.623796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.461 [2024-11-19 11:00:06.624426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.461 [2024-11-19 11:00:06.624459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.461 [2024-11-19 11:00:06.624467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.461 [2024-11-19 11:00:06.624635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.461 [2024-11-19 11:00:06.624787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.461 [2024-11-19 11:00:06.624794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.461 [2024-11-19 11:00:06.624800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.461 [2024-11-19 11:00:06.624807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.461 [2024-11-19 11:00:06.636491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.461 [2024-11-19 11:00:06.637083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.461 [2024-11-19 11:00:06.637115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.461 [2024-11-19 11:00:06.637124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.461 [2024-11-19 11:00:06.637297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.461 [2024-11-19 11:00:06.637450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.461 [2024-11-19 11:00:06.637457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.461 [2024-11-19 11:00:06.637462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.461 [2024-11-19 11:00:06.637468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.461 5857.20 IOPS, 22.88 MiB/s [2024-11-19T10:00:06.656Z] [2024-11-19 11:00:06.649172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.461 [2024-11-19 11:00:06.649623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.461 [2024-11-19 11:00:06.649654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.461 [2024-11-19 11:00:06.649663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.461 [2024-11-19 11:00:06.649827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.461 [2024-11-19 11:00:06.649979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.461 [2024-11-19 11:00:06.649986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.461 [2024-11-19 11:00:06.649992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.461 [2024-11-19 11:00:06.649998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.722 [2024-11-19 11:00:06.661830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.722 [2024-11-19 11:00:06.662413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.722 [2024-11-19 11:00:06.662446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.722 [2024-11-19 11:00:06.662455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.722 [2024-11-19 11:00:06.662619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.722 [2024-11-19 11:00:06.662770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.722 [2024-11-19 11:00:06.662781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.722 [2024-11-19 11:00:06.662786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.722 [2024-11-19 11:00:06.662792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.722 [2024-11-19 11:00:06.674484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.722 [2024-11-19 11:00:06.675056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.722 [2024-11-19 11:00:06.675088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.722 [2024-11-19 11:00:06.675097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.722 [2024-11-19 11:00:06.675269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.722 [2024-11-19 11:00:06.675421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.722 [2024-11-19 11:00:06.675429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.722 [2024-11-19 11:00:06.675435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.722 [2024-11-19 11:00:06.675440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.722 [2024-11-19 11:00:06.687056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.722 [2024-11-19 11:00:06.687659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.722 [2024-11-19 11:00:06.687692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.722 [2024-11-19 11:00:06.687700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.722 [2024-11-19 11:00:06.687864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.722 [2024-11-19 11:00:06.688016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.722 [2024-11-19 11:00:06.688023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.722 [2024-11-19 11:00:06.688029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.722 [2024-11-19 11:00:06.688034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.722 [2024-11-19 11:00:06.699723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.722 [2024-11-19 11:00:06.700260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.722 [2024-11-19 11:00:06.700292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.722 [2024-11-19 11:00:06.700301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.722 [2024-11-19 11:00:06.700466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.722 [2024-11-19 11:00:06.700618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.722 [2024-11-19 11:00:06.700625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.722 [2024-11-19 11:00:06.700631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.722 [2024-11-19 11:00:06.700637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.722 [2024-11-19 11:00:06.712322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.722 [2024-11-19 11:00:06.712880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.722 [2024-11-19 11:00:06.712912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.722 [2024-11-19 11:00:06.712921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.722 [2024-11-19 11:00:06.713085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.722 [2024-11-19 11:00:06.713244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.722 [2024-11-19 11:00:06.713251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.722 [2024-11-19 11:00:06.713258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.722 [2024-11-19 11:00:06.713264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.722 [2024-11-19 11:00:06.724945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.722 [2024-11-19 11:00:06.725341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.722 [2024-11-19 11:00:06.725373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.722 [2024-11-19 11:00:06.725382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.722 [2024-11-19 11:00:06.725549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.722 [2024-11-19 11:00:06.725700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.723 [2024-11-19 11:00:06.725707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.723 [2024-11-19 11:00:06.725713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.723 [2024-11-19 11:00:06.725720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.723 [2024-11-19 11:00:06.737545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.723 [2024-11-19 11:00:06.738136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.723 [2024-11-19 11:00:06.738173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.723 [2024-11-19 11:00:06.738183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.723 [2024-11-19 11:00:06.738350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.723 [2024-11-19 11:00:06.738501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.723 [2024-11-19 11:00:06.738508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.723 [2024-11-19 11:00:06.738514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.723 [2024-11-19 11:00:06.738521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.723 [2024-11-19 11:00:06.750208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.723 [2024-11-19 11:00:06.750799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.723 [2024-11-19 11:00:06.750834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.723 [2024-11-19 11:00:06.750843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.723 [2024-11-19 11:00:06.751007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.723 [2024-11-19 11:00:06.751167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.723 [2024-11-19 11:00:06.751175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.723 [2024-11-19 11:00:06.751180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.723 [2024-11-19 11:00:06.751186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.723 [2024-11-19 11:00:06.762864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.723 [2024-11-19 11:00:06.763467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.723 [2024-11-19 11:00:06.763499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.723 [2024-11-19 11:00:06.763508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.723 [2024-11-19 11:00:06.763672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.723 [2024-11-19 11:00:06.763823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.723 [2024-11-19 11:00:06.763830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.723 [2024-11-19 11:00:06.763836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.723 [2024-11-19 11:00:06.763842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.723 [2024-11-19 11:00:06.775533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.723 [2024-11-19 11:00:06.776118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.723 [2024-11-19 11:00:06.776150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.723 [2024-11-19 11:00:06.776166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.723 [2024-11-19 11:00:06.776332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.723 [2024-11-19 11:00:06.776484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.723 [2024-11-19 11:00:06.776491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.723 [2024-11-19 11:00:06.776497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.723 [2024-11-19 11:00:06.776504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.723 [2024-11-19 11:00:06.788179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.723 [2024-11-19 11:00:06.788775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.723 [2024-11-19 11:00:06.788807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.723 [2024-11-19 11:00:06.788816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.723 [2024-11-19 11:00:06.788983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.723 [2024-11-19 11:00:06.789135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.723 [2024-11-19 11:00:06.789142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.723 [2024-11-19 11:00:06.789148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.723 [2024-11-19 11:00:06.789154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.723 [2024-11-19 11:00:06.800839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.723 [2024-11-19 11:00:06.801449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.723 [2024-11-19 11:00:06.801482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.723 [2024-11-19 11:00:06.801490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.723 [2024-11-19 11:00:06.801654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.723 [2024-11-19 11:00:06.801807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.723 [2024-11-19 11:00:06.801814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.723 [2024-11-19 11:00:06.801820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.723 [2024-11-19 11:00:06.801826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.723 [2024-11-19 11:00:06.813513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.723 [2024-11-19 11:00:06.814046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.723 [2024-11-19 11:00:06.814078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.723 [2024-11-19 11:00:06.814087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.723 [2024-11-19 11:00:06.814260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.723 [2024-11-19 11:00:06.814412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.723 [2024-11-19 11:00:06.814419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.723 [2024-11-19 11:00:06.814425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.723 [2024-11-19 11:00:06.814432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.723 [2024-11-19 11:00:06.826121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.723 [2024-11-19 11:00:06.826677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.723 [2024-11-19 11:00:06.826709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.723 [2024-11-19 11:00:06.826717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.723 [2024-11-19 11:00:06.826881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.723 [2024-11-19 11:00:06.827033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.723 [2024-11-19 11:00:06.827040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.723 [2024-11-19 11:00:06.827049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.723 [2024-11-19 11:00:06.827055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.723 [2024-11-19 11:00:06.838739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.723 [2024-11-19 11:00:06.839239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.723 [2024-11-19 11:00:06.839271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.723 [2024-11-19 11:00:06.839280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.723 [2024-11-19 11:00:06.839447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.723 [2024-11-19 11:00:06.839599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.723 [2024-11-19 11:00:06.839605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.723 [2024-11-19 11:00:06.839611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.723 [2024-11-19 11:00:06.839617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.723 [2024-11-19 11:00:06.851309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.723 [2024-11-19 11:00:06.851902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.723 [2024-11-19 11:00:06.851934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.723 [2024-11-19 11:00:06.851943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.723 [2024-11-19 11:00:06.852107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.724 [2024-11-19 11:00:06.852267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.724 [2024-11-19 11:00:06.852275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.724 [2024-11-19 11:00:06.852281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.724 [2024-11-19 11:00:06.852287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.724 [2024-11-19 11:00:06.863961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.724 [2024-11-19 11:00:06.864510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.724 [2024-11-19 11:00:06.864543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.724 [2024-11-19 11:00:06.864551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.724 [2024-11-19 11:00:06.864715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.724 [2024-11-19 11:00:06.864867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.724 [2024-11-19 11:00:06.864874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.724 [2024-11-19 11:00:06.864880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.724 [2024-11-19 11:00:06.864886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.724 [2024-11-19 11:00:06.876578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.724 [2024-11-19 11:00:06.877167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.724 [2024-11-19 11:00:06.877200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.724 [2024-11-19 11:00:06.877208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.724 [2024-11-19 11:00:06.877372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.724 [2024-11-19 11:00:06.877524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.724 [2024-11-19 11:00:06.877531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.724 [2024-11-19 11:00:06.877537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.724 [2024-11-19 11:00:06.877543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.724 [2024-11-19 11:00:06.889225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.724 [2024-11-19 11:00:06.889797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.724 [2024-11-19 11:00:06.889828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.724 [2024-11-19 11:00:06.889837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.724 [2024-11-19 11:00:06.890002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.724 [2024-11-19 11:00:06.890153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.724 [2024-11-19 11:00:06.890167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.724 [2024-11-19 11:00:06.890174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.724 [2024-11-19 11:00:06.890180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.724 [2024-11-19 11:00:06.901857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.724 [2024-11-19 11:00:06.902442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.724 [2024-11-19 11:00:06.902474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.724 [2024-11-19 11:00:06.902483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.724 [2024-11-19 11:00:06.902649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.724 [2024-11-19 11:00:06.902801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.724 [2024-11-19 11:00:06.902807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.724 [2024-11-19 11:00:06.902813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.724 [2024-11-19 11:00:06.902819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.724 [2024-11-19 11:00:06.914512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.724 [2024-11-19 11:00:06.915108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.724 [2024-11-19 11:00:06.915140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.724 [2024-11-19 11:00:06.915152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.724 [2024-11-19 11:00:06.915327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.724 [2024-11-19 11:00:06.915480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.724 [2024-11-19 11:00:06.915487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.724 [2024-11-19 11:00:06.915492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.724 [2024-11-19 11:00:06.915498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.985 [2024-11-19 11:00:06.927204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.985 [2024-11-19 11:00:06.927751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.985 [2024-11-19 11:00:06.927783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.985 [2024-11-19 11:00:06.927792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.985 [2024-11-19 11:00:06.927956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.985 [2024-11-19 11:00:06.928108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.985 [2024-11-19 11:00:06.928114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.985 [2024-11-19 11:00:06.928120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.985 [2024-11-19 11:00:06.928126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.985 [2024-11-19 11:00:06.939828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.985 [2024-11-19 11:00:06.940175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.985 [2024-11-19 11:00:06.940193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.985 [2024-11-19 11:00:06.940199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.985 [2024-11-19 11:00:06.940349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.985 [2024-11-19 11:00:06.940499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.985 [2024-11-19 11:00:06.940505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.985 [2024-11-19 11:00:06.940510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.985 [2024-11-19 11:00:06.940516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.985 [2024-11-19 11:00:06.952501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.985 [2024-11-19 11:00:06.952987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.985 [2024-11-19 11:00:06.953002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.985 [2024-11-19 11:00:06.953008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.985 [2024-11-19 11:00:06.953156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.985 [2024-11-19 11:00:06.953316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.985 [2024-11-19 11:00:06.953323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.985 [2024-11-19 11:00:06.953329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.985 [2024-11-19 11:00:06.953334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.985 [2024-11-19 11:00:06.965164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.985 [2024-11-19 11:00:06.965757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.985 [2024-11-19 11:00:06.965789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.985 [2024-11-19 11:00:06.965799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.985 [2024-11-19 11:00:06.965962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.985 [2024-11-19 11:00:06.966114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.985 [2024-11-19 11:00:06.966122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.985 [2024-11-19 11:00:06.966127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.985 [2024-11-19 11:00:06.966133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.985 [2024-11-19 11:00:06.977818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.985 [2024-11-19 11:00:06.978376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.985 [2024-11-19 11:00:06.978409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.985 [2024-11-19 11:00:06.978418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.985 [2024-11-19 11:00:06.978582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.985 [2024-11-19 11:00:06.978734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.985 [2024-11-19 11:00:06.978740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.985 [2024-11-19 11:00:06.978746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.985 [2024-11-19 11:00:06.978752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.985 [2024-11-19 11:00:06.990437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.985 [2024-11-19 11:00:06.990927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.986 [2024-11-19 11:00:06.990944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.986 [2024-11-19 11:00:06.990950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.986 [2024-11-19 11:00:06.991099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.986 [2024-11-19 11:00:06.991254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.986 [2024-11-19 11:00:06.991261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.986 [2024-11-19 11:00:06.991271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.986 [2024-11-19 11:00:06.991277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.986 [2024-11-19 11:00:07.003096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.986 [2024-11-19 11:00:07.003661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.986 [2024-11-19 11:00:07.003693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.986 [2024-11-19 11:00:07.003702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.986 [2024-11-19 11:00:07.003866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.986 [2024-11-19 11:00:07.004018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.986 [2024-11-19 11:00:07.004025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.986 [2024-11-19 11:00:07.004031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.986 [2024-11-19 11:00:07.004037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.986 [2024-11-19 11:00:07.015729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.986 [2024-11-19 11:00:07.016365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.986 [2024-11-19 11:00:07.016396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.986 [2024-11-19 11:00:07.016406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.986 [2024-11-19 11:00:07.016570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.986 [2024-11-19 11:00:07.016721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.986 [2024-11-19 11:00:07.016728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.986 [2024-11-19 11:00:07.016733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.986 [2024-11-19 11:00:07.016740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.986 [2024-11-19 11:00:07.028303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.986 [2024-11-19 11:00:07.028901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.986 [2024-11-19 11:00:07.028933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.986 [2024-11-19 11:00:07.028942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.986 [2024-11-19 11:00:07.029106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.986 [2024-11-19 11:00:07.029266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.986 [2024-11-19 11:00:07.029275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.986 [2024-11-19 11:00:07.029280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.986 [2024-11-19 11:00:07.029286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.986 [2024-11-19 11:00:07.040977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.986 [2024-11-19 11:00:07.041571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.986 [2024-11-19 11:00:07.041603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.986 [2024-11-19 11:00:07.041612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.986 [2024-11-19 11:00:07.041776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.986 [2024-11-19 11:00:07.041928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.986 [2024-11-19 11:00:07.041935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.986 [2024-11-19 11:00:07.041940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.986 [2024-11-19 11:00:07.041946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.986 [2024-11-19 11:00:07.053651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.986 [2024-11-19 11:00:07.054139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.986 [2024-11-19 11:00:07.054177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.986 [2024-11-19 11:00:07.054187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.986 [2024-11-19 11:00:07.054354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.986 [2024-11-19 11:00:07.054505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.986 [2024-11-19 11:00:07.054513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.986 [2024-11-19 11:00:07.054518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.986 [2024-11-19 11:00:07.054524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.986 [2024-11-19 11:00:07.066347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.986 [2024-11-19 11:00:07.066886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.986 [2024-11-19 11:00:07.066918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.986 [2024-11-19 11:00:07.066927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.986 [2024-11-19 11:00:07.067092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.986 [2024-11-19 11:00:07.067254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.986 [2024-11-19 11:00:07.067262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.986 [2024-11-19 11:00:07.067268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.986 [2024-11-19 11:00:07.067274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.986 [2024-11-19 11:00:07.078966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.986 [2024-11-19 11:00:07.079574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.986 [2024-11-19 11:00:07.079607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.986 [2024-11-19 11:00:07.079619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.986 [2024-11-19 11:00:07.079783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.986 [2024-11-19 11:00:07.079934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.986 [2024-11-19 11:00:07.079941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.986 [2024-11-19 11:00:07.079947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.986 [2024-11-19 11:00:07.079953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.986 [2024-11-19 11:00:07.091636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.986 [2024-11-19 11:00:07.092242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.986 [2024-11-19 11:00:07.092274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.986 [2024-11-19 11:00:07.092283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.986 [2024-11-19 11:00:07.092449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.986 [2024-11-19 11:00:07.092600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.986 [2024-11-19 11:00:07.092607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.986 [2024-11-19 11:00:07.092613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.986 [2024-11-19 11:00:07.092619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.986 [2024-11-19 11:00:07.104302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.986 [2024-11-19 11:00:07.104739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.986 [2024-11-19 11:00:07.104770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.986 [2024-11-19 11:00:07.104779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.986 [2024-11-19 11:00:07.104944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.986 [2024-11-19 11:00:07.105095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.986 [2024-11-19 11:00:07.105102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.986 [2024-11-19 11:00:07.105108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.986 [2024-11-19 11:00:07.105114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.986 [2024-11-19 11:00:07.116939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.987 [2024-11-19 11:00:07.117491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.987 [2024-11-19 11:00:07.117524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.987 [2024-11-19 11:00:07.117532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.987 [2024-11-19 11:00:07.117696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.987 [2024-11-19 11:00:07.117851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.987 [2024-11-19 11:00:07.117859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.987 [2024-11-19 11:00:07.117864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.987 [2024-11-19 11:00:07.117870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.987 [2024-11-19 11:00:07.129568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.987 [2024-11-19 11:00:07.130168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.987 [2024-11-19 11:00:07.130199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.987 [2024-11-19 11:00:07.130208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.987 [2024-11-19 11:00:07.130372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.987 [2024-11-19 11:00:07.130524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.987 [2024-11-19 11:00:07.130531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.987 [2024-11-19 11:00:07.130537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.987 [2024-11-19 11:00:07.130543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.987 [2024-11-19 11:00:07.142227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.987 [2024-11-19 11:00:07.142800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.987 [2024-11-19 11:00:07.142832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.987 [2024-11-19 11:00:07.142841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.987 [2024-11-19 11:00:07.143004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.987 [2024-11-19 11:00:07.143156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.987 [2024-11-19 11:00:07.143172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.987 [2024-11-19 11:00:07.143178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.987 [2024-11-19 11:00:07.143184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:27.987 [2024-11-19 11:00:07.154870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.987 [2024-11-19 11:00:07.155458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.987 [2024-11-19 11:00:07.155491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.987 [2024-11-19 11:00:07.155499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.987 [2024-11-19 11:00:07.155663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.987 [2024-11-19 11:00:07.155815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.987 [2024-11-19 11:00:07.155822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.987 [2024-11-19 11:00:07.155834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.987 [2024-11-19 11:00:07.155840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.987 [2024-11-19 11:00:07.167524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.987 [2024-11-19 11:00:07.168097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.987 [2024-11-19 11:00:07.168129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:27.987 [2024-11-19 11:00:07.168138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:27.987 [2024-11-19 11:00:07.168310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:27.987 [2024-11-19 11:00:07.168463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.987 [2024-11-19 11:00:07.168470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.987 [2024-11-19 11:00:07.168476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.987 [2024-11-19 11:00:07.168481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.250 [2024-11-19 11:00:07.180173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.250 [2024-11-19 11:00:07.180768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.250 [2024-11-19 11:00:07.180800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.250 [2024-11-19 11:00:07.180809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.250 [2024-11-19 11:00:07.180973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.250 [2024-11-19 11:00:07.181124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.250 [2024-11-19 11:00:07.181131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.250 [2024-11-19 11:00:07.181137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.250 [2024-11-19 11:00:07.181143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.250 [2024-11-19 11:00:07.192838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.250 [2024-11-19 11:00:07.193411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.250 [2024-11-19 11:00:07.193443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.250 [2024-11-19 11:00:07.193452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.250 [2024-11-19 11:00:07.193616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.250 [2024-11-19 11:00:07.193768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.250 [2024-11-19 11:00:07.193775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.250 [2024-11-19 11:00:07.193781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.250 [2024-11-19 11:00:07.193787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.250 [2024-11-19 11:00:07.205469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.250 [2024-11-19 11:00:07.205967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.250 [2024-11-19 11:00:07.205982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.250 [2024-11-19 11:00:07.205988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.250 [2024-11-19 11:00:07.206137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.250 [2024-11-19 11:00:07.206293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.250 [2024-11-19 11:00:07.206300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.250 [2024-11-19 11:00:07.206305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.250 [2024-11-19 11:00:07.206311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.250 [2024-11-19 11:00:07.218123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.250 [2024-11-19 11:00:07.218608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.250 [2024-11-19 11:00:07.218622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.250 [2024-11-19 11:00:07.218628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.250 [2024-11-19 11:00:07.218776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.250 [2024-11-19 11:00:07.218925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.250 [2024-11-19 11:00:07.218931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.250 [2024-11-19 11:00:07.218937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.250 [2024-11-19 11:00:07.218942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.250 [2024-11-19 11:00:07.230765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.250 [2024-11-19 11:00:07.231372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.250 [2024-11-19 11:00:07.231404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.250 [2024-11-19 11:00:07.231413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.250 [2024-11-19 11:00:07.231577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.250 [2024-11-19 11:00:07.231728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.250 [2024-11-19 11:00:07.231735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.250 [2024-11-19 11:00:07.231741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.250 [2024-11-19 11:00:07.231747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.250 [2024-11-19 11:00:07.243434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.250 [2024-11-19 11:00:07.244010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.250 [2024-11-19 11:00:07.244042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.250 [2024-11-19 11:00:07.244053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.250 [2024-11-19 11:00:07.244226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.250 [2024-11-19 11:00:07.244378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.250 [2024-11-19 11:00:07.244386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.250 [2024-11-19 11:00:07.244392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.250 [2024-11-19 11:00:07.244398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.250 [2024-11-19 11:00:07.256085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.250 [2024-11-19 11:00:07.256506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.250 [2024-11-19 11:00:07.256538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.250 [2024-11-19 11:00:07.256547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.250 [2024-11-19 11:00:07.256711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.250 [2024-11-19 11:00:07.256862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.250 [2024-11-19 11:00:07.256870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.250 [2024-11-19 11:00:07.256875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.250 [2024-11-19 11:00:07.256881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.250 [2024-11-19 11:00:07.268710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.250 [2024-11-19 11:00:07.269302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.250 [2024-11-19 11:00:07.269334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.251 [2024-11-19 11:00:07.269343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.251 [2024-11-19 11:00:07.269507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.251 [2024-11-19 11:00:07.269659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.251 [2024-11-19 11:00:07.269666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.251 [2024-11-19 11:00:07.269671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.251 [2024-11-19 11:00:07.269677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1205477 Killed "${NVMF_APP[@]}" "$@" 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1207263 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1207263 00:32:28.251 [2024-11-19 11:00:07.281368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1207263 ']' 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.251 [2024-11-19 11:00:07.281947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.251 [2024-11-19 11:00:07.281979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.251 [2024-11-19 11:00:07.281988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.251 [2024-11-19 11:00:07.282152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.251 
[2024-11-19 11:00:07.282311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.251 [2024-11-19 11:00:07.282319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.251 [2024-11-19 11:00:07.282325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.251 [2024-11-19 11:00:07.282331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.251 11:00:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.251 [2024-11-19 11:00:07.294028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.251 [2024-11-19 11:00:07.294519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.251 [2024-11-19 11:00:07.294535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.251 [2024-11-19 11:00:07.294541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.251 [2024-11-19 11:00:07.294690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.251 [2024-11-19 11:00:07.294839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.251 [2024-11-19 11:00:07.294845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.251 [2024-11-19 11:00:07.294851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.251 [2024-11-19 11:00:07.294856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.251 [2024-11-19 11:00:07.306686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.251 [2024-11-19 11:00:07.307262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.251 [2024-11-19 11:00:07.307294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.251 [2024-11-19 11:00:07.307303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.251 [2024-11-19 11:00:07.307472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.251 [2024-11-19 11:00:07.307624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.251 [2024-11-19 11:00:07.307630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.251 [2024-11-19 11:00:07.307637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.251 [2024-11-19 11:00:07.307643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.251 [2024-11-19 11:00:07.319338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.251 [2024-11-19 11:00:07.319874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.251 [2024-11-19 11:00:07.319906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.251 [2024-11-19 11:00:07.319915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.251 [2024-11-19 11:00:07.320081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.251 [2024-11-19 11:00:07.320238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.251 [2024-11-19 11:00:07.320246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.251 [2024-11-19 11:00:07.320252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.251 [2024-11-19 11:00:07.320258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.251 [2024-11-19 11:00:07.331936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.251 [2024-11-19 11:00:07.332523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.251 [2024-11-19 11:00:07.332556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.251 [2024-11-19 11:00:07.332565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.251 [2024-11-19 11:00:07.332729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.251 [2024-11-19 11:00:07.332881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.251 [2024-11-19 11:00:07.332888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.251 [2024-11-19 11:00:07.332893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.251 [2024-11-19 11:00:07.332899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.251 [2024-11-19 11:00:07.339342] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:32:28.251 [2024-11-19 11:00:07.339407] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.251 [2024-11-19 11:00:07.344588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.251 [2024-11-19 11:00:07.345187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.251 [2024-11-19 11:00:07.345219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.251 [2024-11-19 11:00:07.345228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.251 [2024-11-19 11:00:07.345398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.251 [2024-11-19 11:00:07.345550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.251 [2024-11-19 11:00:07.345557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.251 [2024-11-19 11:00:07.345562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.251 [2024-11-19 11:00:07.345568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.251 [2024-11-19 11:00:07.357264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.251 [2024-11-19 11:00:07.357763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.251 [2024-11-19 11:00:07.357778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.251 [2024-11-19 11:00:07.357784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.251 [2024-11-19 11:00:07.357933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.251 [2024-11-19 11:00:07.358083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.251 [2024-11-19 11:00:07.358089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.251 [2024-11-19 11:00:07.358095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.251 [2024-11-19 11:00:07.358101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.251 [2024-11-19 11:00:07.369917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.251 [2024-11-19 11:00:07.370360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.251 [2024-11-19 11:00:07.370375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.251 [2024-11-19 11:00:07.370381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.252 [2024-11-19 11:00:07.370530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.252 [2024-11-19 11:00:07.370679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.252 [2024-11-19 11:00:07.370686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.252 [2024-11-19 11:00:07.370691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.252 [2024-11-19 11:00:07.370696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.252 [2024-11-19 11:00:07.382579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.252 [2024-11-19 11:00:07.383031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.252 [2024-11-19 11:00:07.383046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.252 [2024-11-19 11:00:07.383051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.252 [2024-11-19 11:00:07.383203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.252 [2024-11-19 11:00:07.383353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.252 [2024-11-19 11:00:07.383364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.252 [2024-11-19 11:00:07.383369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.252 [2024-11-19 11:00:07.383375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.252 [2024-11-19 11:00:07.395194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.252 [2024-11-19 11:00:07.395643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.252 [2024-11-19 11:00:07.395656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.252 [2024-11-19 11:00:07.395662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.252 [2024-11-19 11:00:07.395810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.252 [2024-11-19 11:00:07.395959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.252 [2024-11-19 11:00:07.395965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.252 [2024-11-19 11:00:07.395971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.252 [2024-11-19 11:00:07.395977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.252 [2024-11-19 11:00:07.407795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.252 [2024-11-19 11:00:07.408286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.252 [2024-11-19 11:00:07.408319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.252 [2024-11-19 11:00:07.408328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.252 [2024-11-19 11:00:07.408495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.252 [2024-11-19 11:00:07.408647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.252 [2024-11-19 11:00:07.408654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.252 [2024-11-19 11:00:07.408660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.252 [2024-11-19 11:00:07.408666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.252 [2024-11-19 11:00:07.420373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.252 [2024-11-19 11:00:07.420977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.252 [2024-11-19 11:00:07.421009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.252 [2024-11-19 11:00:07.421018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.252 [2024-11-19 11:00:07.421190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.252 [2024-11-19 11:00:07.421343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.252 [2024-11-19 11:00:07.421350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.252 [2024-11-19 11:00:07.421356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.252 [2024-11-19 11:00:07.421362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.252 [2024-11-19 11:00:07.429505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:28.252 [2024-11-19 11:00:07.433054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.252 [2024-11-19 11:00:07.433635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.252 [2024-11-19 11:00:07.433668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.252 [2024-11-19 11:00:07.433677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.252 [2024-11-19 11:00:07.433841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.252 [2024-11-19 11:00:07.433993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.252 [2024-11-19 11:00:07.434000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.252 [2024-11-19 11:00:07.434008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.252 [2024-11-19 11:00:07.434014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.515 [2024-11-19 11:00:07.445713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.515 [2024-11-19 11:00:07.446233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.515 [2024-11-19 11:00:07.446249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.515 [2024-11-19 11:00:07.446255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.515 [2024-11-19 11:00:07.446404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.515 [2024-11-19 11:00:07.446554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.515 [2024-11-19 11:00:07.446561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.515 [2024-11-19 11:00:07.446568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.515 [2024-11-19 11:00:07.446574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.515 [2024-11-19 11:00:07.458298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.515 [2024-11-19 11:00:07.458363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.515 [2024-11-19 11:00:07.458383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.515 [2024-11-19 11:00:07.458390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.515 [2024-11-19 11:00:07.458396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:28.515 [2024-11-19 11:00:07.458401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.515 [2024-11-19 11:00:07.458765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.515 [2024-11-19 11:00:07.458778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.515 [2024-11-19 11:00:07.458784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.515 [2024-11-19 11:00:07.458934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.515 [2024-11-19 11:00:07.459082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.515 [2024-11-19 11:00:07.459094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.515 [2024-11-19 11:00:07.459100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.515 [2024-11-19 11:00:07.459105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.515 [2024-11-19 11:00:07.459477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:28.515 [2024-11-19 11:00:07.459681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.515 [2024-11-19 11:00:07.459682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:28.515 [2024-11-19 11:00:07.470939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.515 [2024-11-19 11:00:07.471409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.515 [2024-11-19 11:00:07.471423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.515 [2024-11-19 11:00:07.471430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.515 [2024-11-19 11:00:07.471578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.515 [2024-11-19 11:00:07.471728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.515 [2024-11-19 11:00:07.471734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.515 [2024-11-19 11:00:07.471740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.515 [2024-11-19 11:00:07.471745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
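The app_setup_trace notices above give two ways to pull the trace data, and the three reactor notices confirm the app came up on cores 1, 2, and 3, consistent with the -c 0xE core mask from the EAL parameters earlier (0xE is binary 1110, i.e. cores 1-3). A sketch of both capture paths named in the log, assuming the spdk_trace binary is on PATH and the shared-memory file exists as logged:

    import shutil
    import subprocess

    # Live capture path named in the log: 'spdk_trace -s nvmf -i 0'
    # (assumes the spdk_trace binary is installed and the app is running).
    subprocess.run(["spdk_trace", "-s", "nvmf", "-i", "0"], check=False)

    # Offline path named in the log: copy /dev/shm/nvmf_trace.0 aside
    # for later analysis/debug.
    shutil.copy("/dev/shm/nvmf_trace.0", "./nvmf_trace.0")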
00:32:28.515 [2024-11-19 11:00:07.483572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.515 [2024-11-19 11:00:07.484182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.515 [2024-11-19 11:00:07.484218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.515 [2024-11-19 11:00:07.484227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.515 [2024-11-19 11:00:07.484398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.515 [2024-11-19 11:00:07.484550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.515 [2024-11-19 11:00:07.484557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.515 [2024-11-19 11:00:07.484563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.515 [2024-11-19 11:00:07.484570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.515 [2024-11-19 11:00:07.496284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.515 [2024-11-19 11:00:07.496863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.515 [2024-11-19 11:00:07.496897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.515 [2024-11-19 11:00:07.496906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.515 [2024-11-19 11:00:07.497072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.515 [2024-11-19 11:00:07.497235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.515 [2024-11-19 11:00:07.497249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.515 [2024-11-19 11:00:07.497255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.515 [2024-11-19 11:00:07.497261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.515 [2024-11-19 11:00:07.508947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.515 [2024-11-19 11:00:07.509544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.515 [2024-11-19 11:00:07.509577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.516 [2024-11-19 11:00:07.509586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.516 [2024-11-19 11:00:07.509750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.516 [2024-11-19 11:00:07.509902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.516 [2024-11-19 11:00:07.509909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.516 [2024-11-19 11:00:07.509915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.516 [2024-11-19 11:00:07.509922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.516 [2024-11-19 11:00:07.521632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.516 [2024-11-19 11:00:07.522126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.516 [2024-11-19 11:00:07.522142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.516 [2024-11-19 11:00:07.522148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.516 [2024-11-19 11:00:07.522301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.516 [2024-11-19 11:00:07.522451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.516 [2024-11-19 11:00:07.522458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.516 [2024-11-19 11:00:07.522463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.516 [2024-11-19 11:00:07.522468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.516 [2024-11-19 11:00:07.534292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.516 [2024-11-19 11:00:07.534843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.516 [2024-11-19 11:00:07.534876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.516 [2024-11-19 11:00:07.534885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.516 [2024-11-19 11:00:07.535049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.516 [2024-11-19 11:00:07.535207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.516 [2024-11-19 11:00:07.535215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.516 [2024-11-19 11:00:07.535220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.516 [2024-11-19 11:00:07.535227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.516 [2024-11-19 11:00:07.546918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.516 [2024-11-19 11:00:07.547517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.516 [2024-11-19 11:00:07.547549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.516 [2024-11-19 11:00:07.547558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.516 [2024-11-19 11:00:07.547722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.516 [2024-11-19 11:00:07.547874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.516 [2024-11-19 11:00:07.547881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.516 [2024-11-19 11:00:07.547887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.516 [2024-11-19 11:00:07.547894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.516 [2024-11-19 11:00:07.559601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.516 [2024-11-19 11:00:07.560221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.516 [2024-11-19 11:00:07.560253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.516 [2024-11-19 11:00:07.560263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.516 [2024-11-19 11:00:07.560429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.516 [2024-11-19 11:00:07.560581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.516 [2024-11-19 11:00:07.560588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.516 [2024-11-19 11:00:07.560594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.516 [2024-11-19 11:00:07.560600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.516 [2024-11-19 11:00:07.572292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.516 [2024-11-19 11:00:07.572749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.516 [2024-11-19 11:00:07.572765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.516 [2024-11-19 11:00:07.572771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.516 [2024-11-19 11:00:07.572919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.516 [2024-11-19 11:00:07.573069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.516 [2024-11-19 11:00:07.573075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.516 [2024-11-19 11:00:07.573081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.516 [2024-11-19 11:00:07.573086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.516 [2024-11-19 11:00:07.584909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.516 [2024-11-19 11:00:07.585478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.516 [2024-11-19 11:00:07.585515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.516 [2024-11-19 11:00:07.585524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.516 [2024-11-19 11:00:07.585688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.516 [2024-11-19 11:00:07.585839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.516 [2024-11-19 11:00:07.585847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.516 [2024-11-19 11:00:07.585853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.516 [2024-11-19 11:00:07.585859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.516 [2024-11-19 11:00:07.597552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.516 [2024-11-19 11:00:07.598217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.516 [2024-11-19 11:00:07.598250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.516 [2024-11-19 11:00:07.598259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.516 [2024-11-19 11:00:07.598425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.516 [2024-11-19 11:00:07.598577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.516 [2024-11-19 11:00:07.598584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.516 [2024-11-19 11:00:07.598590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.516 [2024-11-19 11:00:07.598596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.516 [2024-11-19 11:00:07.610153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.516 [2024-11-19 11:00:07.610629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.516 [2024-11-19 11:00:07.610644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.516 [2024-11-19 11:00:07.610650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.516 [2024-11-19 11:00:07.610799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.517 [2024-11-19 11:00:07.610948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.517 [2024-11-19 11:00:07.610954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.517 [2024-11-19 11:00:07.610959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.517 [2024-11-19 11:00:07.610964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.517 [2024-11-19 11:00:07.622801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.517 [2024-11-19 11:00:07.623221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.517 [2024-11-19 11:00:07.623234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.517 [2024-11-19 11:00:07.623239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.517 [2024-11-19 11:00:07.623391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.517 [2024-11-19 11:00:07.623540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.517 [2024-11-19 11:00:07.623545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.517 [2024-11-19 11:00:07.623551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.517 [2024-11-19 11:00:07.623556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.517 [2024-11-19 11:00:07.635378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.517 [2024-11-19 11:00:07.635928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.517 [2024-11-19 11:00:07.635959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.517 [2024-11-19 11:00:07.635968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.517 [2024-11-19 11:00:07.636133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.517 [2024-11-19 11:00:07.636291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.517 [2024-11-19 11:00:07.636298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.517 [2024-11-19 11:00:07.636304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.517 [2024-11-19 11:00:07.636309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.517 4881.00 IOPS, 19.07 MiB/s [2024-11-19T10:00:07.712Z] [2024-11-19 11:00:07.648042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.517 [2024-11-19 11:00:07.648590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.517 [2024-11-19 11:00:07.648621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.517 [2024-11-19 11:00:07.648630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.517 [2024-11-19 11:00:07.648794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.517 [2024-11-19 11:00:07.648945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.517 [2024-11-19 11:00:07.648951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.517 [2024-11-19 11:00:07.648956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.517 [2024-11-19 11:00:07.648962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
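The interleaved performance sample above (4881.00 IOPS, 19.07 MiB/s) is consistent with 4 KiB I/Os, since 4881 requests/s at 4096 bytes each works out to exactly the logged bandwidth; the I/O size itself is not stated in the log, so treat it as an inference:

    iops = 4881.00
    io_size_bytes = 4096                      # assumption: 4 KiB I/Os
    mib_per_s = iops * io_size_bytes / 2**20  # bytes/s -> MiB/s
    print(f"{mib_per_s:.2f} MiB/s")           # -> 19.07, matching the log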
00:32:28.517 [2024-11-19 11:00:07.660663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.517 [2024-11-19 11:00:07.661223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.517 [2024-11-19 11:00:07.661254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.517 [2024-11-19 11:00:07.661263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.517 [2024-11-19 11:00:07.661428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.517 [2024-11-19 11:00:07.661579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.517 [2024-11-19 11:00:07.661591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.517 [2024-11-19 11:00:07.661597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.517 [2024-11-19 11:00:07.661603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.517 [2024-11-19 11:00:07.673296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.517 [2024-11-19 11:00:07.673642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.517 [2024-11-19 11:00:07.673657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.517 [2024-11-19 11:00:07.673663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.517 [2024-11-19 11:00:07.673811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.517 [2024-11-19 11:00:07.673959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.517 [2024-11-19 11:00:07.673965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.517 [2024-11-19 11:00:07.673970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.517 [2024-11-19 11:00:07.673975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.517 [2024-11-19 11:00:07.685941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.517 [2024-11-19 11:00:07.686468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.517 [2024-11-19 11:00:07.686500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.517 [2024-11-19 11:00:07.686509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.517 [2024-11-19 11:00:07.686673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.517 [2024-11-19 11:00:07.686824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.517 [2024-11-19 11:00:07.686830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.517 [2024-11-19 11:00:07.686836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.517 [2024-11-19 11:00:07.686842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.517 [2024-11-19 11:00:07.698535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.517 [2024-11-19 11:00:07.698993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.517 [2024-11-19 11:00:07.699008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.517 [2024-11-19 11:00:07.699014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.517 [2024-11-19 11:00:07.699166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.517 [2024-11-19 11:00:07.699316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.517 [2024-11-19 11:00:07.699322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.517 [2024-11-19 11:00:07.699327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.517 [2024-11-19 11:00:07.699332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.781 [2024-11-19 11:00:07.711163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.781 [2024-11-19 11:00:07.711630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.781 [2024-11-19 11:00:07.711662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.781 [2024-11-19 11:00:07.711671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.781 [2024-11-19 11:00:07.711835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.781 [2024-11-19 11:00:07.711986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.781 [2024-11-19 11:00:07.711992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.781 [2024-11-19 11:00:07.711998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.781 [2024-11-19 11:00:07.712004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.781 [2024-11-19 11:00:07.723847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.781 [2024-11-19 11:00:07.724442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.781 [2024-11-19 11:00:07.724473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.781 [2024-11-19 11:00:07.724482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.781 [2024-11-19 11:00:07.724646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.781 [2024-11-19 11:00:07.724797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.781 [2024-11-19 11:00:07.724803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.781 [2024-11-19 11:00:07.724809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.781 [2024-11-19 11:00:07.724815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.781 [2024-11-19 11:00:07.736505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.781 [2024-11-19 11:00:07.736971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.781 [2024-11-19 11:00:07.736986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.781 [2024-11-19 11:00:07.736992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.781 [2024-11-19 11:00:07.737140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.781 [2024-11-19 11:00:07.737294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.781 [2024-11-19 11:00:07.737301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.781 [2024-11-19 11:00:07.737306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.781 [2024-11-19 11:00:07.737311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.781 [2024-11-19 11:00:07.749132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.781 [2024-11-19 11:00:07.749693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.781 [2024-11-19 11:00:07.749729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.781 [2024-11-19 11:00:07.749737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.781 [2024-11-19 11:00:07.749902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.781 [2024-11-19 11:00:07.750053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.781 [2024-11-19 11:00:07.750059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.781 [2024-11-19 11:00:07.750064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.781 [2024-11-19 11:00:07.750070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.781 [2024-11-19 11:00:07.761774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.781 [2024-11-19 11:00:07.762245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.781 [2024-11-19 11:00:07.762276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.781 [2024-11-19 11:00:07.762285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.781 [2024-11-19 11:00:07.762449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.781 [2024-11-19 11:00:07.762600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.781 [2024-11-19 11:00:07.762606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.781 [2024-11-19 11:00:07.762612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.781 [2024-11-19 11:00:07.762617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.781 [2024-11-19 11:00:07.774452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.781 [2024-11-19 11:00:07.775021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.781 [2024-11-19 11:00:07.775052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.781 [2024-11-19 11:00:07.775061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.781 [2024-11-19 11:00:07.775231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.781 [2024-11-19 11:00:07.775383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.781 [2024-11-19 11:00:07.775389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.781 [2024-11-19 11:00:07.775395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.781 [2024-11-19 11:00:07.775401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.782 [2024-11-19 11:00:07.787088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.782 [2024-11-19 11:00:07.787715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.782 [2024-11-19 11:00:07.787746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.782 [2024-11-19 11:00:07.787755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.782 [2024-11-19 11:00:07.787926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.782 [2024-11-19 11:00:07.788077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.782 [2024-11-19 11:00:07.788083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.782 [2024-11-19 11:00:07.788089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.782 [2024-11-19 11:00:07.788095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.782 [2024-11-19 11:00:07.799787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.782 [2024-11-19 11:00:07.800305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.782 [2024-11-19 11:00:07.800337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.782 [2024-11-19 11:00:07.800345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.782 [2024-11-19 11:00:07.800512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.782 [2024-11-19 11:00:07.800663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.782 [2024-11-19 11:00:07.800669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.782 [2024-11-19 11:00:07.800674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.782 [2024-11-19 11:00:07.800680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.782 [2024-11-19 11:00:07.812374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.782 [2024-11-19 11:00:07.812842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.782 [2024-11-19 11:00:07.812857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.782 [2024-11-19 11:00:07.812863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.782 [2024-11-19 11:00:07.813011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.782 [2024-11-19 11:00:07.813165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.782 [2024-11-19 11:00:07.813172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.782 [2024-11-19 11:00:07.813177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.782 [2024-11-19 11:00:07.813182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.782 [2024-11-19 11:00:07.825005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.782 [2024-11-19 11:00:07.825464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.782 [2024-11-19 11:00:07.825494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.782 [2024-11-19 11:00:07.825503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.782 [2024-11-19 11:00:07.825668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.782 [2024-11-19 11:00:07.825820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.782 [2024-11-19 11:00:07.825826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.782 [2024-11-19 11:00:07.825835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.782 [2024-11-19 11:00:07.825841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.782 [2024-11-19 11:00:07.837678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.782 [2024-11-19 11:00:07.838239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.782 [2024-11-19 11:00:07.838269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.782 [2024-11-19 11:00:07.838278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.782 [2024-11-19 11:00:07.838444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.782 [2024-11-19 11:00:07.838596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.782 [2024-11-19 11:00:07.838603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.782 [2024-11-19 11:00:07.838609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.782 [2024-11-19 11:00:07.838615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.782 [2024-11-19 11:00:07.850312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.782 [2024-11-19 11:00:07.850712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.782 [2024-11-19 11:00:07.850743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.782 [2024-11-19 11:00:07.850751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.782 [2024-11-19 11:00:07.850918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.782 [2024-11-19 11:00:07.851069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.782 [2024-11-19 11:00:07.851075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.782 [2024-11-19 11:00:07.851080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.782 [2024-11-19 11:00:07.851086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.782 [2024-11-19 11:00:07.862927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.782 [2024-11-19 11:00:07.863406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.782 [2024-11-19 11:00:07.863438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.782 [2024-11-19 11:00:07.863447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.782 [2024-11-19 11:00:07.863613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.782 [2024-11-19 11:00:07.863764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.782 [2024-11-19 11:00:07.863770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.782 [2024-11-19 11:00:07.863776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.782 [2024-11-19 11:00:07.863782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.782 [2024-11-19 11:00:07.875622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.782 [2024-11-19 11:00:07.876011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.782 [2024-11-19 11:00:07.876041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.782 [2024-11-19 11:00:07.876050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.782 [2024-11-19 11:00:07.876221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.782 [2024-11-19 11:00:07.876372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.782 [2024-11-19 11:00:07.876378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.782 [2024-11-19 11:00:07.876384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.782 [2024-11-19 11:00:07.876390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.782 [2024-11-19 11:00:07.888222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.782 [2024-11-19 11:00:07.888697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.782 [2024-11-19 11:00:07.888728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.782 [2024-11-19 11:00:07.888737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.782 [2024-11-19 11:00:07.888901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.782 [2024-11-19 11:00:07.889052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.782 [2024-11-19 11:00:07.889058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.782 [2024-11-19 11:00:07.889063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.782 [2024-11-19 11:00:07.889069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.782 [2024-11-19 11:00:07.900900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.782 [2024-11-19 11:00:07.901347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.782 [2024-11-19 11:00:07.901362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.782 [2024-11-19 11:00:07.901368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.782 [2024-11-19 11:00:07.901516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.782 [2024-11-19 11:00:07.901665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.782 [2024-11-19 11:00:07.901671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.782 [2024-11-19 11:00:07.901676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.783 [2024-11-19 11:00:07.901681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.783 [2024-11-19 11:00:07.913504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.783 [2024-11-19 11:00:07.914071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.783 [2024-11-19 11:00:07.914101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.783 [2024-11-19 11:00:07.914113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.783 [2024-11-19 11:00:07.914284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.783 [2024-11-19 11:00:07.914436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.783 [2024-11-19 11:00:07.914442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.783 [2024-11-19 11:00:07.914447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.783 [2024-11-19 11:00:07.914453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.783 [2024-11-19 11:00:07.926150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.783 [2024-11-19 11:00:07.926623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.783 [2024-11-19 11:00:07.926638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.783 [2024-11-19 11:00:07.926644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.783 [2024-11-19 11:00:07.926793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.783 [2024-11-19 11:00:07.926941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.783 [2024-11-19 11:00:07.926947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.783 [2024-11-19 11:00:07.926952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.783 [2024-11-19 11:00:07.926957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.783 [2024-11-19 11:00:07.938779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.783 [2024-11-19 11:00:07.939191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.783 [2024-11-19 11:00:07.939204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.783 [2024-11-19 11:00:07.939210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.783 [2024-11-19 11:00:07.939358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.783 [2024-11-19 11:00:07.939506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.783 [2024-11-19 11:00:07.939512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.783 [2024-11-19 11:00:07.939517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.783 [2024-11-19 11:00:07.939521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:28.783 [2024-11-19 11:00:07.951349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.783 [2024-11-19 11:00:07.951757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.783 [2024-11-19 11:00:07.951769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.783 [2024-11-19 11:00:07.951775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.783 [2024-11-19 11:00:07.951923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.783 [2024-11-19 11:00:07.952074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.783 [2024-11-19 11:00:07.952080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.783 [2024-11-19 11:00:07.952085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.783 [2024-11-19 11:00:07.952090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.783 [2024-11-19 11:00:07.963960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.783 [2024-11-19 11:00:07.964409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.783 [2024-11-19 11:00:07.964423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:28.783 [2024-11-19 11:00:07.964428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:28.783 [2024-11-19 11:00:07.964577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:28.783 [2024-11-19 11:00:07.964725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.783 [2024-11-19 11:00:07.964731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.783 [2024-11-19 11:00:07.964736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.783 [2024-11-19 11:00:07.964741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.046 [2024-11-19 11:00:07.976567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.046 [2024-11-19 11:00:07.976978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.046 [2024-11-19 11:00:07.976990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.046 [2024-11-19 11:00:07.976996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.046 [2024-11-19 11:00:07.977143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.046 [2024-11-19 11:00:07.977295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.046 [2024-11-19 11:00:07.977301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.046 [2024-11-19 11:00:07.977306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.046 [2024-11-19 11:00:07.977311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.046 [2024-11-19 11:00:07.989131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.046 [2024-11-19 11:00:07.989467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.046 [2024-11-19 11:00:07.989479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.046 [2024-11-19 11:00:07.989484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.046 [2024-11-19 11:00:07.989632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.046 [2024-11-19 11:00:07.989780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.046 [2024-11-19 11:00:07.989785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.046 [2024-11-19 11:00:07.989794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.046 [2024-11-19 11:00:07.989799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.046 [2024-11-19 11:00:08.001791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.046 [2024-11-19 11:00:08.002275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.046 [2024-11-19 11:00:08.002307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.046 [2024-11-19 11:00:08.002315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.046 [2024-11-19 11:00:08.002480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.046 [2024-11-19 11:00:08.002631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.046 [2024-11-19 11:00:08.002637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.046 [2024-11-19 11:00:08.002643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.046 [2024-11-19 11:00:08.002648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.046 [2024-11-19 11:00:08.014485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.046 [2024-11-19 11:00:08.015046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.046 [2024-11-19 11:00:08.015077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.046 [2024-11-19 11:00:08.015086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.046 [2024-11-19 11:00:08.015257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.046 [2024-11-19 11:00:08.015408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.046 [2024-11-19 11:00:08.015414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.046 [2024-11-19 11:00:08.015420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.046 [2024-11-19 11:00:08.015426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.046 [2024-11-19 11:00:08.027123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.046 [2024-11-19 11:00:08.027656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.046 [2024-11-19 11:00:08.027687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.046 [2024-11-19 11:00:08.027696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.046 [2024-11-19 11:00:08.027860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.046 [2024-11-19 11:00:08.028010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.046 [2024-11-19 11:00:08.028017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.046 [2024-11-19 11:00:08.028022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.046 [2024-11-19 11:00:08.028028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.046 [2024-11-19 11:00:08.039726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.046 [2024-11-19 11:00:08.040413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.046 [2024-11-19 11:00:08.040444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.046 [2024-11-19 11:00:08.040452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.046 [2024-11-19 11:00:08.040617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.046 [2024-11-19 11:00:08.040768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.046 [2024-11-19 11:00:08.040774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.046 [2024-11-19 11:00:08.040780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.046 [2024-11-19 11:00:08.040786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.046 [2024-11-19 11:00:08.052346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.046 [2024-11-19 11:00:08.052814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.046 [2024-11-19 11:00:08.052829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.046 [2024-11-19 11:00:08.052836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.046 [2024-11-19 11:00:08.052984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.046 [2024-11-19 11:00:08.053133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.046 [2024-11-19 11:00:08.053140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.046 [2024-11-19 11:00:08.053145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.046 [2024-11-19 11:00:08.053150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.046 [2024-11-19 11:00:08.064978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.046 [2024-11-19 11:00:08.065291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.046 [2024-11-19 11:00:08.065305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.047 [2024-11-19 11:00:08.065310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.047 [2024-11-19 11:00:08.065459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.047 [2024-11-19 11:00:08.065608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.047 [2024-11-19 11:00:08.065613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.047 [2024-11-19 11:00:08.065618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.047 [2024-11-19 11:00:08.065623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.047 [2024-11-19 11:00:08.077588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.047 [2024-11-19 11:00:08.078024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.047 [2024-11-19 11:00:08.078036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.047 [2024-11-19 11:00:08.078047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.047 [2024-11-19 11:00:08.078199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.047 [2024-11-19 11:00:08.078348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.047 [2024-11-19 11:00:08.078354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.047 [2024-11-19 11:00:08.078360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.047 [2024-11-19 11:00:08.078365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.047 [2024-11-19 11:00:08.090227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.047 [2024-11-19 11:00:08.090694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.047 [2024-11-19 11:00:08.090707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.047 [2024-11-19 11:00:08.090712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.047 [2024-11-19 11:00:08.090860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.047 [2024-11-19 11:00:08.091009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.047 [2024-11-19 11:00:08.091015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.047 [2024-11-19 11:00:08.091020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.047 [2024-11-19 11:00:08.091025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.047 [2024-11-19 11:00:08.102848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.047 [2024-11-19 11:00:08.103417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.047 [2024-11-19 11:00:08.103448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.047 [2024-11-19 11:00:08.103457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.047 [2024-11-19 11:00:08.103623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.047 [2024-11-19 11:00:08.103774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.047 [2024-11-19 11:00:08.103780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.047 [2024-11-19 11:00:08.103785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.047 [2024-11-19 11:00:08.103791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.047 [2024-11-19 11:00:08.115485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.047 [2024-11-19 11:00:08.115954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.047 [2024-11-19 11:00:08.115969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.047 [2024-11-19 11:00:08.115974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.047 [2024-11-19 11:00:08.116122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.047 [2024-11-19 11:00:08.116280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.047 [2024-11-19 11:00:08.116286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.047 [2024-11-19 11:00:08.116291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.047 [2024-11-19 11:00:08.116296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.047 [2024-11-19 11:00:08.128126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.047 [2024-11-19 11:00:08.128703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.047 [2024-11-19 11:00:08.128735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.047 [2024-11-19 11:00:08.128743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.047 [2024-11-19 11:00:08.128907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.047 [2024-11-19 11:00:08.129058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.047 [2024-11-19 11:00:08.129065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.047 [2024-11-19 11:00:08.129070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.047 [2024-11-19 11:00:08.129076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.047 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.047 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:32:29.047 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:29.047 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.047 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:29.047 [2024-11-19 11:00:08.140771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.047 [2024-11-19 11:00:08.141130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.047 [2024-11-19 11:00:08.141146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.047 [2024-11-19 11:00:08.141151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.047 [2024-11-19 11:00:08.141305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.047 [2024-11-19 11:00:08.141453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.047 [2024-11-19 11:00:08.141459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.047 [2024-11-19 11:00:08.141464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.047 [2024-11-19 11:00:08.141469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.047 [2024-11-19 11:00:08.153441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.047 [2024-11-19 11:00:08.153905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.047 [2024-11-19 11:00:08.153918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.047 [2024-11-19 11:00:08.153923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.047 [2024-11-19 11:00:08.154075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.047 [2024-11-19 11:00:08.154228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.048 [2024-11-19 11:00:08.154235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.048 [2024-11-19 11:00:08.154240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.048 [2024-11-19 11:00:08.154245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.048 [2024-11-19 11:00:08.166060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.048 [2024-11-19 11:00:08.166414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.048 [2024-11-19 11:00:08.166428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.048 [2024-11-19 11:00:08.166434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.048 [2024-11-19 11:00:08.166582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.048 [2024-11-19 11:00:08.166730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.048 [2024-11-19 11:00:08.166736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.048 [2024-11-19 11:00:08.166741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.048 [2024-11-19 11:00:08.166745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:29.048 [2024-11-19 11:00:08.178708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.048 [2024-11-19 11:00:08.179052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.048 [2024-11-19 11:00:08.179065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.048 [2024-11-19 11:00:08.179070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.048 [2024-11-19 11:00:08.179222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.048 [2024-11-19 11:00:08.179371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.048 [2024-11-19 11:00:08.179377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.048 [2024-11-19 11:00:08.179382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.048 [2024-11-19 11:00:08.179387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.048 [2024-11-19 11:00:08.183312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:29.048 [2024-11-19 11:00:08.191341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.048 [2024-11-19 11:00:08.191809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.048 [2024-11-19 11:00:08.191821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.048 [2024-11-19 11:00:08.191826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.048 [2024-11-19 11:00:08.191974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.048 [2024-11-19 11:00:08.192122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.048 [2024-11-19 11:00:08.192127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.048 [2024-11-19 11:00:08.192133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.048 [2024-11-19 11:00:08.192137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.048 [2024-11-19 11:00:08.203951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.048 [2024-11-19 11:00:08.204439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.048 [2024-11-19 11:00:08.204470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.048 [2024-11-19 11:00:08.204478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.048 [2024-11-19 11:00:08.204642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.048 [2024-11-19 11:00:08.204793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.048 [2024-11-19 11:00:08.204799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.048 [2024-11-19 11:00:08.204805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.048 [2024-11-19 11:00:08.204810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.048 [2024-11-19 11:00:08.216643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.048 [2024-11-19 11:00:08.217127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.048 [2024-11-19 11:00:08.217142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.048 [2024-11-19 11:00:08.217148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.048 Malloc0 00:32:29.048 [2024-11-19 11:00:08.217350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.048 [2024-11-19 11:00:08.217500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.048 [2024-11-19 11:00:08.217506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.048 [2024-11-19 11:00:08.217511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.048 [2024-11-19 11:00:08.217516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:29.048 [2024-11-19 11:00:08.229207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.048 [2024-11-19 11:00:08.229689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.048 [2024-11-19 11:00:08.229720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.048 [2024-11-19 11:00:08.229729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.048 [2024-11-19 11:00:08.229894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.048 [2024-11-19 11:00:08.230045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.048 [2024-11-19 11:00:08.230051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.048 [2024-11-19 11:00:08.230057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.048 [2024-11-19 11:00:08.230062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.048 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:29.309 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.309 [2024-11-19 11:00:08.241890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.309 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.309 [2024-11-19 11:00:08.242229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.309 [2024-11-19 11:00:08.242245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7de000 with addr=10.0.0.2, port=4420 00:32:29.309 [2024-11-19 11:00:08.242250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7de000 is same with the state(6) to be set 00:32:29.309 [2024-11-19 11:00:08.242399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de000 (9): Bad file descriptor 00:32:29.309 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.309 [2024-11-19 11:00:08.242548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.309 [2024-11-19 11:00:08.242554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.309 [2024-11-19 11:00:08.242559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.309 [2024-11-19 11:00:08.242564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:29.309 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:29.309 [2024-11-19 11:00:08.248859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.309 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.309 11:00:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1206207 00:32:29.309 [2024-11-19 11:00:08.254533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.309 [2024-11-19 11:00:08.318028] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:32:30.525 4775.57 IOPS, 18.65 MiB/s [2024-11-19T10:00:10.662Z] 5783.25 IOPS, 22.59 MiB/s [2024-11-19T10:00:12.046Z] 6578.67 IOPS, 25.70 MiB/s [2024-11-19T10:00:12.987Z] 7211.30 IOPS, 28.17 MiB/s [2024-11-19T10:00:13.928Z] 7730.09 IOPS, 30.20 MiB/s [2024-11-19T10:00:14.868Z] 8182.33 IOPS, 31.96 MiB/s [2024-11-19T10:00:15.812Z] 8552.92 IOPS, 33.41 MiB/s [2024-11-19T10:00:16.754Z] 8868.71 IOPS, 34.64 MiB/s 00:32:37.559 Latency(us) 00:32:37.559 [2024-11-19T10:00:16.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.559 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:37.559 Verification LBA range: start 0x0 length 0x4000 00:32:37.559 Nvme1n1 : 15.00 9145.45 35.72 13494.92 0.00 5635.04 546.13 15947.09 00:32:37.559 [2024-11-19T10:00:16.754Z] =================================================================================================================== 00:32:37.559 [2024-11-19T10:00:16.754Z] Total : 9145.45 35.72 13494.92 0.00 5635.04 546.13 15947.09 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:37.820 rmmod nvme_tcp 00:32:37.820 rmmod nvme_fabrics 00:32:37.820 rmmod nvme_keyring 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1207263 ']' 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1207263 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1207263 ']' 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1207263 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1207263 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1207263' 00:32:37.820 killing process with pid 1207263 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1207263 00:32:37.820 11:00:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1207263 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.081 11:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.005 11:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:40.005 00:32:40.005 real 0m28.372s 00:32:40.005 user 1m3.665s 00:32:40.005 sys 0m7.658s 00:32:40.005 11:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:40.005 11:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:40.005 ************************************ 00:32:40.005 END TEST nvmf_bdevperf 00:32:40.005 ************************************ 00:32:40.005 11:00:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:40.006 11:00:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:40.006 11:00:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.006 11:00:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.267 ************************************ 00:32:40.267 START TEST nvmf_target_disconnect 00:32:40.267 ************************************ 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:40.267 * Looking for test storage... 
00:32:40.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:40.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.267 --rc genhtml_branch_coverage=1 00:32:40.267 --rc genhtml_function_coverage=1 00:32:40.267 --rc genhtml_legend=1 00:32:40.267 --rc geninfo_all_blocks=1 00:32:40.267 --rc geninfo_unexecuted_blocks=1 00:32:40.267 00:32:40.267 ' 00:32:40.267 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:40.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.267 --rc genhtml_branch_coverage=1 00:32:40.267 --rc genhtml_function_coverage=1 00:32:40.267 --rc genhtml_legend=1 00:32:40.267 --rc geninfo_all_blocks=1 00:32:40.267 --rc geninfo_unexecuted_blocks=1 00:32:40.267 00:32:40.267 ' 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:40.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.268 --rc genhtml_branch_coverage=1 00:32:40.268 --rc genhtml_function_coverage=1 00:32:40.268 --rc genhtml_legend=1 00:32:40.268 --rc geninfo_all_blocks=1 00:32:40.268 --rc geninfo_unexecuted_blocks=1 00:32:40.268 00:32:40.268 ' 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:40.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.268 --rc genhtml_branch_coverage=1 00:32:40.268 --rc genhtml_function_coverage=1 00:32:40.268 --rc genhtml_legend=1 00:32:40.268 --rc geninfo_all_blocks=1 00:32:40.268 --rc geninfo_unexecuted_blocks=1 00:32:40.268 00:32:40.268 ' 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:40.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:32:40.268 11:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:48.409 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:48.409 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:48.409 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:48.409 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
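The trace above is gather_supported_nvmf_pci_devs at work: it builds device-ID lists for Intel E810 (0x1592, 0x159b), X722 (0x37d2) and the Mellanox ConnectX parts, matches both functions of the E810 card at 0000:4b:00.0/0000:4b:00.1, and then resolves each PCI function to its kernel interface through sysfs. A minimal standalone sketch of that last resolution step, reusing the PCI address from this log (any NIC with a bound netdev would do):

#!/usr/bin/env bash
# Sketch of the sysfs lookup the test uses to map a PCI function to a netdev.
# 0000:4b:00.0 is the address reported in this log; substitute your own device.
pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
if [[ -e ${pci_net_devs[0]} ]]; then
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
else
    echo "no netdev under $pci (is a kernel driver bound?)"
fi

The '[[ up == up ]]' checks in the trace are evidently comparing each candidate device's operstate against "up" before the two cvl_0_* interfaces are accepted.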
00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:48.409 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:48.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:48.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:32:48.410 00:32:48.410 --- 10.0.0.2 ping statistics --- 00:32:48.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.410 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:48.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:48.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:32:48.410 00:32:48.410 --- 10.0.0.1 ping statistics --- 00:32:48.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.410 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:48.410 ************************************ 00:32:48.410 START TEST nvmf_target_disconnect_tc1 00:32:48.410 ************************************ 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:48.410 11:00:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:48.410 11:00:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:48.410 [2024-11-19 11:00:27.046168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.410 [2024-11-19 11:00:27.046237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x500ad0 with addr=10.0.0.2, port=4420 00:32:48.410 [2024-11-19 11:00:27.046266] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:48.410 [2024-11-19 11:00:27.046283] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:48.410 [2024-11-19 11:00:27.046291] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:48.410 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:48.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:48.410 Initializing NVMe Controllers 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:48.410 00:32:48.410 real 0m0.143s 00:32:48.410 user 0m0.060s 00:32:48.410 sys 0m0.083s 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:48.410 ************************************ 00:32:48.410 END TEST nvmf_target_disconnect_tc1 00:32:48.410 ************************************ 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
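nvmf_tcp_init above carves the two E810 ports into a point-to-point test rig: cvl_0_0 becomes the target side inside the cvl_0_0_ns_spdk namespace (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), an iptables rule admits TCP/4420 on the initiator interface, and one ping in each direction proves the path. Condensed from the trace (interface, namespace and address names exactly as this log uses them):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

With that in place, tc1 is a deliberate negative test: the reconnect example is pointed at 10.0.0.2:4420 before any target exists, the NOT wrapper inverts its exit status, and the connect() failure with errno 111 plus es=1 above is precisely what the test asserts.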
00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:48.410 ************************************ 00:32:48.410 START TEST nvmf_target_disconnect_tc2 00:32:48.410 ************************************ 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1213788 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1213788 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1213788 ']' 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:48.410 11:00:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.410 [2024-11-19 11:00:27.212611] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:32:48.410 [2024-11-19 11:00:27.212673] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.410 [2024-11-19 11:00:27.312419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:48.410 [2024-11-19 11:00:27.365379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:48.410 [2024-11-19 11:00:27.365428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:48.410 [2024-11-19 11:00:27.365437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:48.411 [2024-11-19 11:00:27.365444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:48.411 [2024-11-19 11:00:27.365450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:48.411 [2024-11-19 11:00:27.367458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:48.411 [2024-11-19 11:00:27.367618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:48.411 [2024-11-19 11:00:27.367780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:48.411 [2024-11-19 11:00:27.367811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:48.982 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:48.982 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:32:48.982 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:48.982 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:48.982 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.982 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:48.982 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:48.982 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.982 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.982 Malloc0 00:32:48.982 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.983 [2024-11-19 11:00:28.134258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.983 11:00:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.983 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.983 [2024-11-19 11:00:28.174632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.242 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.242 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:49.242 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.242 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:49.242 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.242 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1214117 00:32:49.242 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:49.242 11:00:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:51.172 11:00:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1213788 00:32:51.172 11:00:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error 
(sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Write completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 Read completed with error (sct=0, sc=8) 00:32:51.172 starting I/O failed 00:32:51.172 [2024-11-19 11:00:30.213476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.172 [2024-11-19 11:00:30.213842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.172 [2024-11-19 11:00:30.213876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.172 qpair failed and we were unable to recover it. 00:32:51.172 [2024-11-19 11:00:30.214441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.172 [2024-11-19 11:00:30.214497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.172 qpair failed and we were unable to recover it. 00:32:51.172 [2024-11-19 11:00:30.214731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.172 [2024-11-19 11:00:30.214744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.172 qpair failed and we were unable to recover it. 
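tc2 runs the scenario the other way around: nvmf_tgt is started inside the target namespace (pid 1213788, with -i 0 -e 0xFFFF -m 0xF0), configured over RPC, and only then is the reconnect example (pid 1214117) attached and the target killed with kill -9. The rpc_cmd calls in the trace correspond, roughly, to this standalone sequence against SPDK's stock scripts/rpc.py client (a sketch; the method names and arguments are copied from the trace above):

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the target dies, the 32 'Read/Write completed with error (sct=0, sc=8)' lines are the example's in-flight queue (it was started with -q 32) being failed back as its qpairs drop, and everything after the 'CQ transport error -6' line is the example retrying a port nobody is listening on any more.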
00:32:51.172 [2024-11-19 11:00:30.215049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.172 [2024-11-19 11:00:30.215063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.172 qpair failed and we were unable to recover it. 00:32:51.172 [2024-11-19 11:00:30.215565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.172 [2024-11-19 11:00:30.215622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.172 qpair failed and we were unable to recover it. 00:32:51.172 [2024-11-19 11:00:30.216041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.216055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.216413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.216426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.216534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.216547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.216799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.216811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.217123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.217135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.217519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.217532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.217853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.217864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.218234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.218246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 
00:32:51.173 [2024-11-19 11:00:30.218612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.218623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.218830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.218841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.219183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.219196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.219588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.219600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.219919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.219930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.220279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.220291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.220603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.220614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.220963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.220974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.221303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.221315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 00:32:51.173 [2024-11-19 11:00:30.221620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.173 [2024-11-19 11:00:30.221632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.173 qpair failed and we were unable to recover it. 
[... 00:32:51.173-00:32:51.175: the identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock connection error (tqpair=0x7f8410000b90, addr=10.0.0.2, port=4420) / 'qpair failed and we were unable to recover it.' triplet repeats for about fifty further attempts between 11:00:30.221 and 11:00:30.245, differing only in the microsecond timestamps ...]
00:32:51.175 [2024-11-19 11:00:30.244883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.244896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.245217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.245232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.245538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.245550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.245867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.245880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.246179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.246192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.246622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.246635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.247009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.247022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.247340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.247353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.247667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.247680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.248016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.248029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 
00:32:51.175 [2024-11-19 11:00:30.248344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.248357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.248711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.248724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.248937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.248949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.249308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.249321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.249623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.249635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.249963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.249975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.250279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.250294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.250611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.250623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.250921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.250945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.251266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.251280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 
00:32:51.175 [2024-11-19 11:00:30.251582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.251594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.251921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.251933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.175 [2024-11-19 11:00:30.252257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.175 [2024-11-19 11:00:30.252270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.175 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.252639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.252651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.252953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.252975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.253140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.253154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.253494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.253507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.253838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.253851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.254203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.254216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.254622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.254636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 
00:32:51.176 [2024-11-19 11:00:30.254956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.254969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.255183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.255202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.255533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.255550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.255879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.255897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.256196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.256214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.256542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.256559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.256887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.256903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.257115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.257135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.257474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.257492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.257834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.257852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 
00:32:51.176 [2024-11-19 11:00:30.258175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.258192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.258526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.258543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.258859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.258875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.259282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.259300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.259624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.259640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.259980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.259997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.260315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.260332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.260741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.260757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.261103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.261120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.261326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.261345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 
00:32:51.176 [2024-11-19 11:00:30.261683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.261699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.262036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.262054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.262385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.262403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.262715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.262732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.263040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.263057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.263381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.263400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.263742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.263760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.264101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.264118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.264431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.264454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 00:32:51.176 [2024-11-19 11:00:30.264792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.264810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.176 qpair failed and we were unable to recover it. 
00:32:51.176 [2024-11-19 11:00:30.264985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.176 [2024-11-19 11:00:30.265003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.265315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.265332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.265688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.265705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.265904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.265921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.266302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.266324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.266641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.266661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.266999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.267020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.267381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.267404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.267745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.267765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.268091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.268113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 
00:32:51.177 [2024-11-19 11:00:30.268439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.268461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.268787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.268808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.269156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.269196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.269530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.269558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.269885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.269906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.270230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.270252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.270586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.270606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.270942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.270965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.271298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.271320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.271558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.271578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 
00:32:51.177 [2024-11-19 11:00:30.271900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.271920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.272237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.272259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.272641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.272661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.272913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.272934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.273282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.273304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.273637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.273657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.274003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.274024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.274242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.274266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.274628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.274649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.274978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.275000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 
00:32:51.177 [2024-11-19 11:00:30.275226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.275248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.275611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.275631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.177 qpair failed and we were unable to recover it. 00:32:51.177 [2024-11-19 11:00:30.275972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.177 [2024-11-19 11:00:30.275994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.276336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.276358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.276714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.276742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.277111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.277139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.277490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.277526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.277777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.277809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.278210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.278246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.278643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.278671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 
00:32:51.178 [2024-11-19 11:00:30.279033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.279062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.279411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.279441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.279809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.279837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.280202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.280232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.280588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.280616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.280949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.280978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.281366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.281395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.281753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.281781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.282137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.282174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.282537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.282565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 
00:32:51.178 [2024-11-19 11:00:30.282924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.282953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.283321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.283351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.283713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.283742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.284101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.284129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.284465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.284501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.284865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.284894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.285245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.285276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.285597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.285625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.285990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.286018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.286386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.286415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 
00:32:51.178 [2024-11-19 11:00:30.286775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.286803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.287177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.287207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.287581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.287609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.287968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.287998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.288277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.288306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.288571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.288605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.288957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.288990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.289333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.289362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.289717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.289745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 00:32:51.178 [2024-11-19 11:00:30.290104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.178 [2024-11-19 11:00:30.290132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.178 qpair failed and we were unable to recover it. 
00:32:51.178 [2024-11-19 11:00:30.290507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:51.179 [2024-11-19 11:00:30.290536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:51.179 qpair failed and we were unable to recover it.
00:32:51.179 [2024-11-19 11:00:30.290894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:51.179 [2024-11-19 11:00:30.290923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:51.179 qpair failed and we were unable to recover it.
[... the same three-line record (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 11:00:30.291269 through 11:00:30.372600 ...]
00:32:51.455 [2024-11-19 11:00:30.372964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:51.455 [2024-11-19 11:00:30.372993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:51.455 qpair failed and we were unable to recover it.
00:32:51.455 [2024-11-19 11:00:30.373338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.373368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.373732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.373761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.374128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.374156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.374444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.374472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.374869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.374897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.375249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.375279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.375663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.375700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.376076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.376104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.376469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.376498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.376766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.376793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 
00:32:51.455 [2024-11-19 11:00:30.377168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.377198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.377553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.377582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.377955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.377985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.378374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.378404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.378754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.378782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.379181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.379211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.379573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.379601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.379970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.379998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.380338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.380368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 00:32:51.455 [2024-11-19 11:00:30.380726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.455 [2024-11-19 11:00:30.380755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.455 qpair failed and we were unable to recover it. 
00:32:51.456 [2024-11-19 11:00:30.381125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.381152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.381535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.381565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.381934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.381962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.382324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.382353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.382715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.382743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.383084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.383118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.383543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.383572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.383916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.383945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.384292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.384323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.384693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.384721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 
00:32:51.456 [2024-11-19 11:00:30.384998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.385026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.385382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.385412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.385841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.385868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.386204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.386234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.386596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.386624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.386968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.386997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.387346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.387376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.387729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.387758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.388006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.388037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.388411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.388443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 
00:32:51.456 [2024-11-19 11:00:30.388797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.388825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.389187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.389217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.389567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.389596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.389957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.389985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.390355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.390385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.390728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.390757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.391117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.391146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.391540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.391569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.391931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.391961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.392335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.392366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 
00:32:51.456 [2024-11-19 11:00:30.392716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.392746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.393113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.393141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.393542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.393572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.393918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.393948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.456 [2024-11-19 11:00:30.394306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.456 [2024-11-19 11:00:30.394335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.456 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.394698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.394726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.395089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.395117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.395488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.395518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.395878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.395906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.396269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.396299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 
00:32:51.457 [2024-11-19 11:00:30.396643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.396671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.397023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.397052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.397419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.397448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.397814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.397842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.398207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.398237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.398585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.398619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.398971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.399000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.399391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.399421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.399761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.399788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.400181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.400212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 
00:32:51.457 [2024-11-19 11:00:30.400568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.400596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.400953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.400981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.401343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.401375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.401717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.401745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.402124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.402153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.402508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.402537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.402899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.402928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.403289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.403319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.403679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.403708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.404062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.404091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 
00:32:51.457 [2024-11-19 11:00:30.404448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.404478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.404824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.404852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.405213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.405243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.405631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.405659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.406020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.406047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.406413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.406443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.406813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.406841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.407215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.407245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.407501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.407533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.407927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.407956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 
00:32:51.457 [2024-11-19 11:00:30.408317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.408348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.408703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.457 [2024-11-19 11:00:30.408732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.457 qpair failed and we were unable to recover it. 00:32:51.457 [2024-11-19 11:00:30.409071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.409100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.409462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.409492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.409833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.409862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.410228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.410258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.410612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.410641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.411000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.411029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.411245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.411278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.411632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.411660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 
00:32:51.458 [2024-11-19 11:00:30.412029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.412057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.412439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.412469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.412822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.412849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.413234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.413264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.413618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.413647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.413993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.414028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.414386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.414417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.414797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.414825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.415199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.415229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.415685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.415713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 
00:32:51.458 [2024-11-19 11:00:30.416080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.416108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.416462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.416491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.416845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.416875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.417238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.417267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.417511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.417542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.417912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.417941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.418309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.418339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.418680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.418708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.418991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.419019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.419392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.419421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 
00:32:51.458 [2024-11-19 11:00:30.419778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.419806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.420186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.420216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.420455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.420483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.420739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.420767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.421026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.421053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.421492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.421521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.421862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.421890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.458 qpair failed and we were unable to recover it. 00:32:51.458 [2024-11-19 11:00:30.422238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.458 [2024-11-19 11:00:30.422268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.459 qpair failed and we were unable to recover it. 00:32:51.459 [2024-11-19 11:00:30.422692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.459 [2024-11-19 11:00:30.422720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.459 qpair failed and we were unable to recover it. 00:32:51.459 [2024-11-19 11:00:30.423051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.459 [2024-11-19 11:00:30.423080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.459 qpair failed and we were unable to recover it. 
00:32:51.459 [2024-11-19 11:00:30.423492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.459 [2024-11-19 11:00:30.423521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.459 qpair failed and we were unable to recover it.
00:32:51.464 [... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages for tqpair=0x7f8410000b90 (addr=10.0.0.2, port=4420) repeat continuously from 2024-11-19 11:00:30.423876 through 11:00:30.503481 ...]
00:32:51.464 [2024-11-19 11:00:30.503831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.464 [2024-11-19 11:00:30.503859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.504214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.504244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.504620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.504647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.504903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.504932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.505357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.505387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.505758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.505793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.506138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.506175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.506541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.506569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.506803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.506831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.507191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.507220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 
00:32:51.465 [2024-11-19 11:00:30.507457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.507487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.507775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.507804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.508190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.508222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.508626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.508656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.509030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.509060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.509414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.509443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.509819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.509847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.510228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.510258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.510607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.510635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.511003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.511033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 
00:32:51.465 [2024-11-19 11:00:30.511266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.511295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.511663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.511690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.512059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.512087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.512435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.512466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.512844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.512871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.513254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.513284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.513667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.513695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.514043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.514071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.514420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.514449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.514853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.514881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 
00:32:51.465 [2024-11-19 11:00:30.515206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.515236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.515636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.515664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.516035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.516065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.516356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.516386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.516776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.516804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.517152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.517199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.517547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.517576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.517942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.465 [2024-11-19 11:00:30.517970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.465 qpair failed and we were unable to recover it. 00:32:51.465 [2024-11-19 11:00:30.518341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.518372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.518724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.518754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 
00:32:51.466 [2024-11-19 11:00:30.519126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.519153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.519429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.519457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.519786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.519815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.520199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.520229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.520618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.520646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.521024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.521052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.521408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.521439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.521839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.521867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.522213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.522242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.522608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.522636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 
00:32:51.466 [2024-11-19 11:00:30.523009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.523037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.523305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.523334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.523705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.523732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.524003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.524030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.524272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.524301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.524674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.524703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.525076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.525105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.525344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.525373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.525736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.525765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.526128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.526167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 
00:32:51.466 [2024-11-19 11:00:30.526539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.526569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.526929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.526957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.527219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.527249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.527516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.527543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.527891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.527918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.528276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.528307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.528669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.528697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.529060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.529088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.529511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.529540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.529902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.529930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 
00:32:51.466 [2024-11-19 11:00:30.530300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.530330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.530555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.530586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.530938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.530973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.531320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.531351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.531600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.531632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.466 [2024-11-19 11:00:30.531892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.466 [2024-11-19 11:00:30.531923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.466 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.532279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.532310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.532652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.532680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.533128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.533156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.533541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.533571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 
00:32:51.467 [2024-11-19 11:00:30.533938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.533966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.534187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.534217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.534605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.534633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.534992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.535020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.535471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.535501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.535865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.535892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.536236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.536266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.536631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.536659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.537022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.537050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.537416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.537445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 
00:32:51.467 [2024-11-19 11:00:30.537784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.537813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.538186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.538215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.538635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.538663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.538925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.538952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.539230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.539260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.539626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.539654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.540038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.540065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.540448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.540477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.540837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.540864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.541230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.541259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 
00:32:51.467 [2024-11-19 11:00:30.541609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.541638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.542007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.542035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.542340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.542370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.542735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.542763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.543125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.543153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.543558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.543587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.543957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.543985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.544328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.544359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.544727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.544755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.545117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.545145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 
00:32:51.467 [2024-11-19 11:00:30.545520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.545550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.545801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.545829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.546179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.546214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.467 qpair failed and we were unable to recover it. 00:32:51.467 [2024-11-19 11:00:30.546586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.467 [2024-11-19 11:00:30.546614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.546954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.546982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.547343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.547372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.547716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.547745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.548116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.548145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.548490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.548520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.548881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.548909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 
00:32:51.468 [2024-11-19 11:00:30.549240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.549271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.549648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.549676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.550150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.550188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.550544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.550572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.550838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.550865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.551218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.551247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.551613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.551642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.552005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.552032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.552397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.552427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 00:32:51.468 [2024-11-19 11:00:30.552792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.468 [2024-11-19 11:00:30.552821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.468 qpair failed and we were unable to recover it. 
00:32:51.468 [2024-11-19 11:00:30.553185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:51.468 [2024-11-19 11:00:30.553216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:51.468 qpair failed and we were unable to recover it.
[log condensed: the three entries above repeat, with timestamps advancing from 2024-11-19 11:00:30.553185 through 11:00:30.632362, for roughly 200 further connection attempts; every connect() to 10.0.0.2 port 4420 fails with errno = 111 and each qpair fails without recovering]
00:32:51.474 [2024-11-19 11:00:30.632617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.632645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.633010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.633038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.633388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.633417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.633773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.633801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.634178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.634208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.634565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.634593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.634941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.634969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.635335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.635365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.635713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.635742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.636104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.636133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 
00:32:51.474 [2024-11-19 11:00:30.636424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.636454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.636807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.636837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.637216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.637251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.637641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.637670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.638017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.638045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.638430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.638460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.638869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.638898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.474 [2024-11-19 11:00:30.639193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.474 [2024-11-19 11:00:30.639223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.474 qpair failed and we were unable to recover it. 00:32:51.749 [2024-11-19 11:00:30.639588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.749 [2024-11-19 11:00:30.639619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.749 qpair failed and we were unable to recover it. 00:32:51.749 [2024-11-19 11:00:30.639981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.749 [2024-11-19 11:00:30.640010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.749 qpair failed and we were unable to recover it. 
00:32:51.750 [2024-11-19 11:00:30.640386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.640416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.640786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.640814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.641263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.641292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.641629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.641658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.642037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.642067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.642290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.642322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.642725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.642755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.643090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.643119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.643508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.643538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.643886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.643915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 
00:32:51.750 [2024-11-19 11:00:30.644283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.644312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.644648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.644678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.644917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.644946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.645361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.645390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.645749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.645778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.646146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.646191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.646591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.646619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.646978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.647007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.647384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.647413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.647796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.647826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 
00:32:51.750 [2024-11-19 11:00:30.648259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.648289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.648657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.648684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.649030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.649057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.649406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.649436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.649795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.649822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.650183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.650213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.650582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.650610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.650858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.650885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.651173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.651203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.651540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.651568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 
00:32:51.750 [2024-11-19 11:00:30.651958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.651986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.652369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.652401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.652759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.652793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.653034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.653061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.653414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.653443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.653794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.653824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.654198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.750 [2024-11-19 11:00:30.654227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.750 qpair failed and we were unable to recover it. 00:32:51.750 [2024-11-19 11:00:30.654608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.654637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.654995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.655023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.655419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.655449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 
00:32:51.751 [2024-11-19 11:00:30.655816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.655844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.656206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.656237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.656673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.656702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.657050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.657079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.657437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.657466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.657715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.657743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.658156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.658193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.658553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.658581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.658950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.658978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.659360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.659390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 
00:32:51.751 [2024-11-19 11:00:30.659740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.659768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.660010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.660038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.660295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.660328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.660716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.660744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.661104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.661133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.661483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.661511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.661868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.661898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.662238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.662268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.662636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.662665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.663012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.663042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 
00:32:51.751 [2024-11-19 11:00:30.663390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.663420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.663757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.663787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.664155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.664195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.664589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.664617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.664992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.665020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.665399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.665429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.665786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.665814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.666195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.666224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.666573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.666602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.666969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.666998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 
00:32:51.751 [2024-11-19 11:00:30.667343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.667372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.667733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.667763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.668123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.668157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.668512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.668541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.751 qpair failed and we were unable to recover it. 00:32:51.751 [2024-11-19 11:00:30.668913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.751 [2024-11-19 11:00:30.668940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.669313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.669343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.669751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.669779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.670124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.670151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.670510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.670538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.670901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.670931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 
00:32:51.752 [2024-11-19 11:00:30.671183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.671214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.671606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.671634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.671990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.672017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.672390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.672419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.672778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.672806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.673169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.673198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.673566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.673594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.673941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.673970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.674332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.674363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.674613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.674645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 
00:32:51.752 [2024-11-19 11:00:30.675009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.675037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.675401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.675431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.675731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.675760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.676125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.676153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.676505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.676535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.676897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.676927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.677299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.677328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.677579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.677609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.677969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.677998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.678354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.678386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 
00:32:51.752 [2024-11-19 11:00:30.678719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.678749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.679115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.679144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.679404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.679437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.679804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.679835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.680228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.680258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.680622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.680651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.681014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.681042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.752 [2024-11-19 11:00:30.681400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.752 [2024-11-19 11:00:30.681432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.752 qpair failed and we were unable to recover it. 00:32:51.753 [2024-11-19 11:00:30.681811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.753 [2024-11-19 11:00:30.681839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.753 qpair failed and we were unable to recover it. 00:32:51.753 [2024-11-19 11:00:30.682275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.753 [2024-11-19 11:00:30.682308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.753 qpair failed and we were unable to recover it. 
00:32:51.753 [2024-11-19 11:00:30.682666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:51.753 [2024-11-19 11:00:30.682694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:51.753 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every successive connect attempt from 11:00:30.683065 through 11:00:30.759445 ...]
00:32:51.758 [2024-11-19 11:00:30.759902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:51.758 [2024-11-19 11:00:30.759931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:51.758 qpair failed and we were unable to recover it.
00:32:51.758 [2024-11-19 11:00:30.760307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.758 [2024-11-19 11:00:30.760338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.758 qpair failed and we were unable to recover it. 00:32:51.758 [2024-11-19 11:00:30.760613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.758 [2024-11-19 11:00:30.760641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.758 qpair failed and we were unable to recover it. 00:32:51.758 [2024-11-19 11:00:30.760868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.758 [2024-11-19 11:00:30.760895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.758 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.761317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.761347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.761691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.761720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.762089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.762119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.762402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.762432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.762786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.762815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.763226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.763256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.763620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.763651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 
00:32:51.759 [2024-11-19 11:00:30.764004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.764033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.764426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.764457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.764799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.764829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.765180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.765210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.765448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.765476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.765862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.765894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.766275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.766305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.766572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.766603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.766961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.766991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.767358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.767396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 
00:32:51.759 [2024-11-19 11:00:30.767765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.767794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.768146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.768185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.768528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.768557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.768951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.768979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.769343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.769372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.769736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.769764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.770005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.770032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.770401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.770430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.770793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.770823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.771180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.771213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 
00:32:51.759 [2024-11-19 11:00:30.771585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.771613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.771976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.772005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.772374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.772403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.772757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.772786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.773037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.759 [2024-11-19 11:00:30.773068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.759 qpair failed and we were unable to recover it. 00:32:51.759 [2024-11-19 11:00:30.773453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.773485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.773853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.773881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.774244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.774273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.774633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.774663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.775015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.775044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 
00:32:51.760 [2024-11-19 11:00:30.775426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.775456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.775888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.775917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.776277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.776306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.776673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.776701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.777056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.777084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.777471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.777501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.777841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.777870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.778206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.778238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.778583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.778612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.778845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.778874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 
00:32:51.760 [2024-11-19 11:00:30.779135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.779174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.779526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.779555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.779962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.779991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.780346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.780374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.780741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.780768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.781115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.781145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.781515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.781544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.781911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.781938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.782310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.782340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.782706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.782747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 
00:32:51.760 [2024-11-19 11:00:30.782971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.783000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.783265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.783294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.783640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.783668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.784029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.784057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.760 [2024-11-19 11:00:30.784398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.760 [2024-11-19 11:00:30.784428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.760 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.784769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.784797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.785166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.785197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.785449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.785481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.785819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.785850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.786196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.786226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 
00:32:51.761 [2024-11-19 11:00:30.786531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.786559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.786816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.786845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.787228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.787257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.787611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.787653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.787998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.788027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.788387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.788417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.788777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.788805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.789146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.789196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.789581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.789610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.789978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.790006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 
00:32:51.761 [2024-11-19 11:00:30.790356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.790385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.790764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.790792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.791260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.791290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.791650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.791678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.792012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.792040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.792387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.792418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.792779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.792808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.793180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.793211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.793535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.793564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.793908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.793938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 
00:32:51.761 [2024-11-19 11:00:30.794307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.794338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.794697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.794725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.795086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.795117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.795552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.795583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.795837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.795868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.796270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.796300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.796665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.796696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.797056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.797083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.797443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.797473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.797706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.797742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 
00:32:51.761 [2024-11-19 11:00:30.798097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.798125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.798494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.761 [2024-11-19 11:00:30.798524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.761 qpair failed and we were unable to recover it. 00:32:51.761 [2024-11-19 11:00:30.798884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.798913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.799256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.799287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.799656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.799685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.800133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.800169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.800534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.800563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.800904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.800934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.801282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.801312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.801737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.801767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 
00:32:51.762 [2024-11-19 11:00:30.802181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.802212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.802578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.802606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.802950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.802978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.803335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.803365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.803729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.803759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.804130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.804165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.804530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.804558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.804813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.804845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.805225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.805256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.805535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.805565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 
00:32:51.762 [2024-11-19 11:00:30.805897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.805926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.806294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.806324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.806568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.806596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.806966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.806994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.807368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.807400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.807735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.807765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.808178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.808210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.808579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.808612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.808959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.808987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 00:32:51.762 [2024-11-19 11:00:30.809349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.762 [2024-11-19 11:00:30.809379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.762 qpair failed and we were unable to recover it. 
00:32:51.762 [2024-11-19 11:00:30.809739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:51.762 [2024-11-19 11:00:30.809769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:51.762 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed with errno = 111; sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 11:00:30.809739 through 11:00:30.892410 ...]
00:32:51.768 [2024-11-19 11:00:30.892370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:51.768 [2024-11-19 11:00:30.892410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:51.768 qpair failed and we were unable to recover it.
00:32:51.768 [2024-11-19 11:00:30.892765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.892794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.893151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.893189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.893569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.893598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.893847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.893876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.894240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.894270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.894632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.894660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.895043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.895073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.895410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.895441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.895679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.895707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.896080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.896108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 
00:32:51.768 [2024-11-19 11:00:30.896452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.896481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.896833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.896863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.897214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.897245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.897650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.897679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.898032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.898061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.768 qpair failed and we were unable to recover it. 00:32:51.768 [2024-11-19 11:00:30.898408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.768 [2024-11-19 11:00:30.898437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.898793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.898823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.899187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.899218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.899584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.899613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.899970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.899998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 
00:32:51.769 [2024-11-19 11:00:30.900336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.900366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.900725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.900754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.901098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.901128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.901411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.901441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.901687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.901718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.902069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.902098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.902474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.902504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.902860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.902890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.903035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.903064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.903362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.903393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 
00:32:51.769 [2024-11-19 11:00:30.903737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.903769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.904133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.904170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.904508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.904538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.904899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.904928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.905268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.905303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.905734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.905763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.906091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.906121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.906571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.906601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.906961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.906990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.907342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.907378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 
00:32:51.769 [2024-11-19 11:00:30.907751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.907781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.908153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.908192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.908452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.908483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.908848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.908877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.909235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.909266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.909635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.909663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.910049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.910079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.910454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.910484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.910857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.910885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.911274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.911305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 
00:32:51.769 [2024-11-19 11:00:30.911739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.911769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.912094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.912121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.912520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.769 [2024-11-19 11:00:30.912550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.769 qpair failed and we were unable to recover it. 00:32:51.769 [2024-11-19 11:00:30.912907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.912937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.913283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.913313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.913691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.913719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.914100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.914128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.914497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.914527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.914908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.914937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.915377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.915409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 
00:32:51.770 [2024-11-19 11:00:30.915734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.915762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.916108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.916136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.916417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.916446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.916785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.916814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.917142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.917179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.917512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.917542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.917902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.917936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.918273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.918304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.918702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.918730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.919097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.919127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 
00:32:51.770 [2024-11-19 11:00:30.919529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.919560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.919932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.919961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.920341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.920371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.920701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.920730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.921099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.921127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.921516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.921548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.921802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.921830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.922181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.922213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.922618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.922647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.923021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.923050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 
00:32:51.770 [2024-11-19 11:00:30.923337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.923366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.923740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.923768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.924135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.924174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.925943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.926007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.926412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.926449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.926823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.926853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.927228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.927259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.927675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.927703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.928117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.928145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 00:32:51.770 [2024-11-19 11:00:30.928492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.770 [2024-11-19 11:00:30.928521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.770 qpair failed and we were unable to recover it. 
00:32:51.771 [2024-11-19 11:00:30.928866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.771 [2024-11-19 11:00:30.928895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.771 qpair failed and we were unable to recover it. 00:32:51.771 [2024-11-19 11:00:30.929258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.771 [2024-11-19 11:00:30.929288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.771 qpair failed and we were unable to recover it. 00:32:51.771 [2024-11-19 11:00:30.929648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.771 [2024-11-19 11:00:30.929676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:51.771 qpair failed and we were unable to recover it. 00:32:52.046 [2024-11-19 11:00:30.930048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.930080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.930429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.930461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.930815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.930845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.931199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.931230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.932922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.932979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.933337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.933370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.933735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.933765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 
00:32:52.047 [2024-11-19 11:00:30.934106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.934135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.934513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.934543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.934961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.934989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.935367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.935398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.935769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.935797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.936099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.936128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.936408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.936449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.936814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.936842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.937220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.937250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.937619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.937648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 
00:32:52.047 [2024-11-19 11:00:30.938013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.938040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.938313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.938342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.938727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.938756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.939123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.939152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.939539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.939568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.939831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.939863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.940234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.940265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.940671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.940700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.940966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.940994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.941344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.941373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 
00:32:52.047 [2024-11-19 11:00:30.941717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.941747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.942118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.942148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.942552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.047 [2024-11-19 11:00:30.942583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.047 qpair failed and we were unable to recover it. 00:32:52.047 [2024-11-19 11:00:30.942952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.048 [2024-11-19 11:00:30.942979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-11-19 11:00:30.943365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.048 [2024-11-19 11:00:30.943395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-11-19 11:00:30.943749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.048 [2024-11-19 11:00:30.943778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-11-19 11:00:30.944039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.048 [2024-11-19 11:00:30.944067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-11-19 11:00:30.944443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.048 [2024-11-19 11:00:30.944473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-11-19 11:00:30.944859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.048 [2024-11-19 11:00:30.944888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.048 qpair failed and we were unable to recover it. 00:32:52.048 [2024-11-19 11:00:30.945315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.048 [2024-11-19 11:00:30.945345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.048 qpair failed and we were unable to recover it. 
00:32:52.048 [2024-11-19 11:00:30.945753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.048 [2024-11-19 11:00:30.945782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:52.048 qpair failed and we were unable to recover it.
00:32:52.048 [... the same three-line error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnection attempt from 11:00:30.945753 through 11:00:31.026559 ...]
00:32:52.054 [2024-11-19 11:00:31.026528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.054 [2024-11-19 11:00:31.026559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:52.054 qpair failed and we were unable to recover it.
00:32:52.055 [2024-11-19 11:00:31.026914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.026942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.027294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.027325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.027692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.027722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.028082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.028110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.028444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.028475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.028760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.028790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.029132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.029178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.029525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.029556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.029921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.029949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.030324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.030355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 
00:32:52.055 [2024-11-19 11:00:31.030733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.030763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.031103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.031131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.031505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.031537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.031900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.031931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.032292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.032324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.032669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.032699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.033063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.033092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.033414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.033446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.033830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.033860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.034200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.034232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 
00:32:52.055 [2024-11-19 11:00:31.034598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.034628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.034957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.034987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.035338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.035370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.035776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.035806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.036152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.036211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.036630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.036660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.037012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.037045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.055 qpair failed and we were unable to recover it. 00:32:52.055 [2024-11-19 11:00:31.037414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.055 [2024-11-19 11:00:31.037445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.037795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.037824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.038201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.038232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 
00:32:52.056 [2024-11-19 11:00:31.038615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.038645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.039001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.039031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.039435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.039466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.039842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.039871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.040225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.040256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.040604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.040634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.041009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.041039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.041388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.041419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.041778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.041806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.042059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.042092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 
00:32:52.056 [2024-11-19 11:00:31.042464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.042494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.042858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.042888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.043235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.043264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.043618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.043646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.044007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.044036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.044417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.044446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.044832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.044868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.045131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.045167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.045557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.045586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.045945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.045973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 
00:32:52.056 [2024-11-19 11:00:31.046227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.046256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.046639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.046668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.047038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.047067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.047417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.047448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.047798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.047826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.048191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.048221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.048570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.048600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.048967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.048994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.049359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.056 [2024-11-19 11:00:31.049389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.056 qpair failed and we were unable to recover it. 00:32:52.056 [2024-11-19 11:00:31.049765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.049794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 
00:32:52.057 [2024-11-19 11:00:31.050043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.050071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.050409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.050440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.050784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.050814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.051165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.051196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.051548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.051577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.051923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.051952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.052335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.052366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.052650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.052678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.052909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.052941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.053297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.053327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 
00:32:52.057 [2024-11-19 11:00:31.053703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.053732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.054098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.054128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.054494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.054524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.054951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.054980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.055149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.055194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.055544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.055573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.055946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.055975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.056321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.056351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.056715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.056745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.057108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.057138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 
00:32:52.057 [2024-11-19 11:00:31.057502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.057531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.057906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.057935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.058319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.058348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.058712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.058741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.059100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.059128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.059583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.059613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.059973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.060006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.060377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.060407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.060657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.060685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 00:32:52.057 [2024-11-19 11:00:31.061085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.057 [2024-11-19 11:00:31.061115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.057 qpair failed and we were unable to recover it. 
00:32:52.057 [2024-11-19 11:00:31.061475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.061506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.061875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.061903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.062243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.062273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.062664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.062692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.063056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.063087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.063470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.063500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.063856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.063885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.064244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.064274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.064644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.064673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.065055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.065084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 
00:32:52.058 [2024-11-19 11:00:31.065468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.065499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.065861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.065889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.066266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.066297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.066641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.066671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.066913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.066945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.067219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.067248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.067616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.067646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.067945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.067973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.068322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.068353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.068690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.068719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 
00:32:52.058 [2024-11-19 11:00:31.069060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.069089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.069483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.069513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.069873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.069901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.070267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.070297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.070651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.070681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.071054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.071083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.071334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.071363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.058 [2024-11-19 11:00:31.071713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.058 [2024-11-19 11:00:31.071741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.058 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.072099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.072129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.072535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.072565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 
00:32:52.059 [2024-11-19 11:00:31.072923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.072951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.073201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.073235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.073618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.073646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.074002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.074030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.074405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.074435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.074785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.074814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.075179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.075215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.075651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.075679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.076032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.076060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.076419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.076449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 
00:32:52.059 [2024-11-19 11:00:31.076811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.076840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.077190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.077220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.077600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.077629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.077977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.078006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.078348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.078379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.078751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.078780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.079139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.079182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.079574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.079603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.079961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.079990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 00:32:52.059 [2024-11-19 11:00:31.080347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.059 [2024-11-19 11:00:31.080377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.059 qpair failed and we were unable to recover it. 
00:32:52.065 [2024-11-19 11:00:31.153623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.065 [2024-11-19 11:00:31.153651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.065 qpair failed and we were unable to recover it. 00:32:52.065 [2024-11-19 11:00:31.153996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.065 [2024-11-19 11:00:31.154025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.065 qpair failed and we were unable to recover it. 00:32:52.065 [2024-11-19 11:00:31.154388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.065 [2024-11-19 11:00:31.154418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.065 qpair failed and we were unable to recover it. 00:32:52.065 [2024-11-19 11:00:31.154778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.065 [2024-11-19 11:00:31.154807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.155181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.155212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.155473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.155506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.155877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.155906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.156264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.156294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.156653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.156682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.157040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.157069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 
00:32:52.066 [2024-11-19 11:00:31.157423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.157453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.157757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.157788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.158155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.158195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.158571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.158599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.159027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.159055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.159405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.159435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.159796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.159825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.160191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.160221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.160589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.160616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.161003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.161033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 
00:32:52.066 [2024-11-19 11:00:31.161391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.161421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.161762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.161791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.162153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.162205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.162579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.162612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.162959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.162988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.163369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.163398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.163628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.163659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.164038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.164067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.164410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.164440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.164804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.164834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 
00:32:52.066 [2024-11-19 11:00:31.165263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.165294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.165653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.165682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.166118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.166147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.166520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.166551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.166915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.166945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.066 [2024-11-19 11:00:31.167256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.066 [2024-11-19 11:00:31.167287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.066 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.167632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.167661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.167897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.167925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.168305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.168336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.168694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.168723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 
00:32:52.067 [2024-11-19 11:00:31.169084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.169112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.169471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.169500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.169865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.169893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.170251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.170281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.170686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.170715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.171068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.171096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.171460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.171490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.171848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.171876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.172240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.172269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.172634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.172664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 
00:32:52.067 [2024-11-19 11:00:31.173032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.173061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.173404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.173433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.173804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.173832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.174191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.174221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.174562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.174591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.174957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.174985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.175349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.175380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.175602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.175634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.175900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.175928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.176310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.176341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 
00:32:52.067 [2024-11-19 11:00:31.176594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.176622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.176973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.177002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.177330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.177360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.177706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.177741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.178101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.178129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.178526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.178557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.178926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.178954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.179300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.179330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.179565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.067 [2024-11-19 11:00:31.179597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.067 qpair failed and we were unable to recover it. 00:32:52.067 [2024-11-19 11:00:31.179983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.180012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 
00:32:52.068 [2024-11-19 11:00:31.180387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.180417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.180759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.180787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.181218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.181248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.181605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.181634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.181985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.182013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.182264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.182295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.182664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.182692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.183051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.183080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.183449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.183480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.183849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.183878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 
00:32:52.068 [2024-11-19 11:00:31.184239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.184269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.184548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.184575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.184942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.184973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.185307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.185338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.185710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.185739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.186104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.186132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.186513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.186542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.186912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.186940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.187304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.187334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.187727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.187756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 
00:32:52.068 [2024-11-19 11:00:31.188091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.188120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.188504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.188534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.188769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.188797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.189173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.189202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.189533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.189561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.189808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.189842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.190107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.190135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.068 [2024-11-19 11:00:31.190532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.068 [2024-11-19 11:00:31.190563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.068 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.190927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.190957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.191316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.191346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 
00:32:52.069 [2024-11-19 11:00:31.191696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.191724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.192083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.192112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.192466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.192496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.192858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.192894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.193227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.193257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.193625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.193654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.194019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.194048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.194400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.194431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.194757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.194785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.195045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.195074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 
00:32:52.069 [2024-11-19 11:00:31.195434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.195466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.195821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.195850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.196098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.196131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.196506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.196534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.196792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.196820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.197182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.197213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.197545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.197575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.197926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.197955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.198283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.198314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.198685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.198713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 
00:32:52.069 [2024-11-19 11:00:31.199077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.199105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.199516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.199546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.199906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.199935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.200299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.200330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.069 [2024-11-19 11:00:31.200681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.069 [2024-11-19 11:00:31.200711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.069 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.201044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.201072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.201421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.201453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.201817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.201845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.202185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.202215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.202583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.202612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 
00:32:52.070 [2024-11-19 11:00:31.202982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.203011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.203386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.203416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.203785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.203816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.204058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.204086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.204441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.204471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.204839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.204867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.205229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.205258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.205618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.205646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.206007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.206038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.206396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.206426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 
00:32:52.070 [2024-11-19 11:00:31.206791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.206820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.207192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.207222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.207594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.207623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.207988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.208025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.208390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.208421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.208782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.208810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.209180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.209209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.209505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.209533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.209900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.209929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.210298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.210328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 
00:32:52.070 [2024-11-19 11:00:31.210664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.210693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.211071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.211100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.070 qpair failed and we were unable to recover it. 00:32:52.070 [2024-11-19 11:00:31.211466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.070 [2024-11-19 11:00:31.211497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.211861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.211889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.212313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.212343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.212693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.212721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.213062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.213091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.213460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.213491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.213837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.213865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.214228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.214259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 
00:32:52.071 [2024-11-19 11:00:31.214610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.214639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.214993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.215023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.215308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.215338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.215722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.215751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.216005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.216034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.216377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.216408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.216635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.216664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.217028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.217058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.217398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.217430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.217779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.217809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 
00:32:52.071 [2024-11-19 11:00:31.218173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.218204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.218579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.218608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.218982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.219012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.219382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.219414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.219786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.219816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.220066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.220096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.220261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.220293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.220539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.220569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.220810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.220840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 00:32:52.071 [2024-11-19 11:00:31.221201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.221232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.071 qpair failed and we were unable to recover it. 
00:32:52.071 [2024-11-19 11:00:31.221543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.071 [2024-11-19 11:00:31.221572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.221920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.221951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.222321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.222352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.222725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.222760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.223108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.223138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.223389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.223419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.223781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.223810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.224157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.224201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.224563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.224593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.224959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.224989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 
00:32:52.072 [2024-11-19 11:00:31.225231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.225263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.225572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.225602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.225990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.226021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.226377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.226410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.072 [2024-11-19 11:00:31.226792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.072 [2024-11-19 11:00:31.226822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.072 qpair failed and we were unable to recover it. 00:32:52.350 [2024-11-19 11:00:31.227047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.350 [2024-11-19 11:00:31.227078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.350 qpair failed and we were unable to recover it. 00:32:52.350 [2024-11-19 11:00:31.227430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.350 [2024-11-19 11:00:31.227462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.350 qpair failed and we were unable to recover it. 00:32:52.350 [2024-11-19 11:00:31.227821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.350 [2024-11-19 11:00:31.227852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.350 qpair failed and we were unable to recover it. 00:32:52.350 [2024-11-19 11:00:31.228219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.350 [2024-11-19 11:00:31.228249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.350 qpair failed and we were unable to recover it. 00:32:52.350 [2024-11-19 11:00:31.228626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.350 [2024-11-19 11:00:31.228656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.350 qpair failed and we were unable to recover it. 
00:32:52.350 [2024-11-19 11:00:31.229033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.350 [2024-11-19 11:00:31.229064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.350 qpair failed and we were unable to recover it. 00:32:52.350 [2024-11-19 11:00:31.229419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.350 [2024-11-19 11:00:31.229449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.350 qpair failed and we were unable to recover it. 00:32:52.350 [2024-11-19 11:00:31.229701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.229733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.229995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.230024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.230253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.230282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.230676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.230704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.231055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.231084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.231334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.231363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.231781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.231809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.232181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.232210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 
00:32:52.351 [2024-11-19 11:00:31.232604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.232634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.232869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.232898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.233292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.233322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.233702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.233731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.234103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.234131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.234481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.234511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.234655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.234684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.235035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.235064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.235458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.235488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.235845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.235874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 
00:32:52.351 [2024-11-19 11:00:31.236206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.236237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.236605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.236634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.237053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.237081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.237430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.237468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.237825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.237854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.238211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.238241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.238635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.238663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.238911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.238939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.239292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.239328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.239661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.239690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 
00:32:52.351 [2024-11-19 11:00:31.240055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.240083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.240473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.240503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.240838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.240867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.241209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.241240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.241596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.241627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.241989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.242021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.242380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.242409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.242849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.351 [2024-11-19 11:00:31.242877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.351 qpair failed and we were unable to recover it. 00:32:52.351 [2024-11-19 11:00:31.243246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.243277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.243653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.243682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 
00:32:52.352 [2024-11-19 11:00:31.244061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.244090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.244444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.244474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.244837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.244867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.245234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.245264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.245663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.245692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.246043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.246071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.246448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.246477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.246871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.246900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.247350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.247381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.247716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.247744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 
00:32:52.352 [2024-11-19 11:00:31.248125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.248155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.248400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.248431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.248814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.248845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.249194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.249224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.249583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.249612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.249972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.250000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.250343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.250373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.250630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.250658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.250895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.250924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.251273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.251303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 
00:32:52.352 [2024-11-19 11:00:31.251675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.251703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.252094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.252122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.252493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.252524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.252865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.252902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.253248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.253279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.253648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.253677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.254125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.254153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.254569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.254598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.254961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.254990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.255338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.255368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 
00:32:52.352 [2024-11-19 11:00:31.255720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.255749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.256102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.256131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.256391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.256420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.256794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.256823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.257185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.352 [2024-11-19 11:00:31.257217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.352 qpair failed and we were unable to recover it. 00:32:52.352 [2024-11-19 11:00:31.257585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.257615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.257963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.257993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.258343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.258375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.258744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.258774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.259143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.259196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 
00:32:52.353 [2024-11-19 11:00:31.259557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.259587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.259968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.259997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.260269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.260299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.260571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.260600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.260976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.261005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.261348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.261377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.261748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.261777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.262184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.262216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.262328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.262355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.262746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.262775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 
00:32:52.353 [2024-11-19 11:00:31.263154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.263211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.263539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.263569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.264015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.264043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.264280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.264310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.264677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.264707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.265056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.265084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.265484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.265514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.265752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.265780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.266157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.266200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.266617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.266646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 
00:32:52.353 [2024-11-19 11:00:31.267007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.267038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.267393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.267425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.267802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.267831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.268193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.268229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.268457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.268485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.268893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.268923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.269199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.269229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.269629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.269658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.269995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.270024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.270378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.270410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 
00:32:52.353 [2024-11-19 11:00:31.270643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.270671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.353 [2024-11-19 11:00:31.271045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.353 [2024-11-19 11:00:31.271075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.353 qpair failed and we were unable to recover it. 00:32:52.354 [2024-11-19 11:00:31.271330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.354 [2024-11-19 11:00:31.271360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.354 qpair failed and we were unable to recover it. 00:32:52.354 [2024-11-19 11:00:31.271641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.354 [2024-11-19 11:00:31.271669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.354 qpair failed and we were unable to recover it. 00:32:52.354 [2024-11-19 11:00:31.272082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.354 [2024-11-19 11:00:31.272111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.354 qpair failed and we were unable to recover it. 00:32:52.354 [2024-11-19 11:00:31.272510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.354 [2024-11-19 11:00:31.272540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.354 qpair failed and we were unable to recover it. 00:32:52.354 [2024-11-19 11:00:31.272890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.354 [2024-11-19 11:00:31.272919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.354 qpair failed and we were unable to recover it. 00:32:52.354 [2024-11-19 11:00:31.273296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.354 [2024-11-19 11:00:31.273326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.354 qpair failed and we were unable to recover it. 00:32:52.354 [2024-11-19 11:00:31.273700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.354 [2024-11-19 11:00:31.273729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.354 qpair failed and we were unable to recover it. 00:32:52.354 [2024-11-19 11:00:31.274089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.354 [2024-11-19 11:00:31.274117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.354 qpair failed and we were unable to recover it. 
[... the same three-line failure repeats for every reconnect attempt from 11:00:31.270 through 11:00:31.351: connect() to 10.0.0.2 port 4420 fails with errno = 111 and tqpair=0x7f8410000b90 cannot be recovered ...]
00:32:52.359 [2024-11-19 11:00:31.347950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.347980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.348338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.348368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.348744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.348773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.349148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.349187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.349588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.349616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.350035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.350065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.350323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.350353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.350723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.350753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.351115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.351144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.351518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.351547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 
00:32:52.359 [2024-11-19 11:00:31.351903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.351932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.352235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.352265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.352620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.352650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.353014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.353042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.353433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.353463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.353825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.353853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.354221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.354250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.359 [2024-11-19 11:00:31.354626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.359 [2024-11-19 11:00:31.354655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.359 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.355024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.355053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.355405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.355435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 
00:32:52.360 [2024-11-19 11:00:31.355740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.355776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.356141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.356183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.356534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.356563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.356868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.356897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.357221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.357255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.357686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.357715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.358075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.358105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.358476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.358506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.358880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.358909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.359351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.359382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 
00:32:52.360 [2024-11-19 11:00:31.359722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.359750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.360115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.360143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.360494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.360525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.360884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.360915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.361278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.361309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.361687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.361716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.362154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.362204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.362570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.362599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.362958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.362987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.363243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.363272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 
00:32:52.360 [2024-11-19 11:00:31.363645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.363676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.364035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.364065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.364429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.364458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.364825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.364855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.365228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.365259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.365629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.365658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.365915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.365947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.366302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.360 [2024-11-19 11:00:31.366334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.360 qpair failed and we were unable to recover it. 00:32:52.360 [2024-11-19 11:00:31.366591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.366619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.366871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.366900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 
00:32:52.361 [2024-11-19 11:00:31.367252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.367282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.367659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.367687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.368056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.368085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.368440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.368470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.368829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.368858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.369221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.369250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.369649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.369677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.370034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.370064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.370431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.370461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.370798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.370826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 
00:32:52.361 [2024-11-19 11:00:31.371186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.371223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.371577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.371607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.371976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.372005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.372386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.372417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.372786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.372815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.373097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.373125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.373476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.373507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.373865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.373895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.374259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.374289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.374659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.374688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 
00:32:52.361 [2024-11-19 11:00:31.375050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.375088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.375431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.375461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.375818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.375847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.376215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.376245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.376631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.376661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.377016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.377046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.377412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.377441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.377799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.377828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.378190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.378221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.378584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.378612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 
00:32:52.361 [2024-11-19 11:00:31.378963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.378992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.379360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.379391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.379764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.379793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.380151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.380192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.361 qpair failed and we were unable to recover it. 00:32:52.361 [2024-11-19 11:00:31.380453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.361 [2024-11-19 11:00:31.380481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.380829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.380857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.381218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.381248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.381625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.381655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.382044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.382073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.382505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.382536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 
00:32:52.362 [2024-11-19 11:00:31.382903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.382931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.383302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.383331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.383673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.383703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.384062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.384093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.384525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.384554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.384904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.384933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.385287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.385317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.385698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.385727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.386094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.386124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.386511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.386542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 
00:32:52.362 [2024-11-19 11:00:31.386889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.386924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.387377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.387408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.387767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.387796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.388178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.388209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.388590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.388618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.388969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.388998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.389291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.389320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.389677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.389705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.390068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.390096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.390454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.390484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 
00:32:52.362 [2024-11-19 11:00:31.390841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.390871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.391230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.391261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.391622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.391651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.392011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.392039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.392378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.392409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.392778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.392807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.393177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.393208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.393564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.393592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.393955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.393984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.394347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.394377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 
00:32:52.362 [2024-11-19 11:00:31.394817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.394845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.362 [2024-11-19 11:00:31.395207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.362 [2024-11-19 11:00:31.395238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.362 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.395598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.395626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.395987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.396016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.396369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.396400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.396749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.396778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.397037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.397070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.397436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.397466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.397805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.397842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.398183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.398213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 
00:32:52.363 [2024-11-19 11:00:31.398562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.398591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.398892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.398921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.399239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.399269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.399640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.399669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.400032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.400061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.400423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.400453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.400836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.400865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.401227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.401257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.401689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.401717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 00:32:52.363 [2024-11-19 11:00:31.402063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.402093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it. 
00:32:52.363 [2024-11-19 11:00:31.402447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.363 [2024-11-19 11:00:31.402484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.363 qpair failed and we were unable to recover it.
00:32:52.369 [... the same three-line failure (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, approximately 210 occurrences in total between 11:00:31.402447 and 11:00:31.481987 ...]
00:32:52.369 [2024-11-19 11:00:31.482222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.482252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.482635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.482665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.483003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.483034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.483400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.483431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.483786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.483816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.484182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.484211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.484566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.484595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.485068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.485098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.485452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.485482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.485841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.485871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 
00:32:52.369 [2024-11-19 11:00:31.486244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.486274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.486636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.486666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.487026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.487055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.487308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.487338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.487589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.487618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.487970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.487998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.488431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.488461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.488854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.488884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.489124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.489152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.489559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.489588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 
00:32:52.369 [2024-11-19 11:00:31.489940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.369 [2024-11-19 11:00:31.489971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.369 qpair failed and we were unable to recover it. 00:32:52.369 [2024-11-19 11:00:31.490327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.490358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.490686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.490715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.491084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.491112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.491514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.491544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.491919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.491948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.492309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.492339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.492712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.492741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.493205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.493236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.493590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.493619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 
00:32:52.370 [2024-11-19 11:00:31.494013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.494049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.494401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.494433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.494793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.494823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.495191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.495220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.495586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.495616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.495968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.495996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.496358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.496391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.496682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.496711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.497043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.497073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.497316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.497347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 
00:32:52.370 [2024-11-19 11:00:31.497774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.497802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.498055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.498083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.498449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.498480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.498915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.498944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.499274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.499304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.499544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.499574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.499919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.499948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.500313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.500344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.500700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.500730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.501144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.501188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 
00:32:52.370 [2024-11-19 11:00:31.501590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.501618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.501771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.501799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.502187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.502219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.502573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.502601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.502968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.502998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.503383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.503414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.503761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.503790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.503935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.370 [2024-11-19 11:00:31.503966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.370 qpair failed and we were unable to recover it. 00:32:52.370 [2024-11-19 11:00:31.504323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.504353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.504707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.504736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 
00:32:52.371 [2024-11-19 11:00:31.505073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.505103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.505347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.505380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.505725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.505754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.506105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.506134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.506519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.506548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.506920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.506949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.507304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.507333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.507551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.507580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.507943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.507972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.508290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.508320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 
00:32:52.371 [2024-11-19 11:00:31.508662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.508691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.509048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.509077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.509445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.509475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.509868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.509899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.510283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.510313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.510679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.510708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.511066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.511095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.511350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.511384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.511831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.511862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.512203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.512234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 
00:32:52.371 [2024-11-19 11:00:31.512612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.512640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.513017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.513046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.513388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.513419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.513824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.513853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.514227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.514259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.514626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.514653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.515029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.515057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.515442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.515474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.515722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.515750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.516095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.516123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 
00:32:52.371 [2024-11-19 11:00:31.516521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.516552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.516910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.516938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.517287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.517317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.517689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.517717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.518088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.518116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.371 qpair failed and we were unable to recover it. 00:32:52.371 [2024-11-19 11:00:31.518483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.371 [2024-11-19 11:00:31.518514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.518918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.518947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.519307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.519345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.519703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.519732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.520099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.520127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 
00:32:52.372 [2024-11-19 11:00:31.520521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.520551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.520912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.520941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.521186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.521216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.521584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.521611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.521962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.521991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.522414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.522444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.522730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.522758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.523137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.523176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.523554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.523583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.523942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.523969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 
00:32:52.372 [2024-11-19 11:00:31.524326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.524356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.524727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.524756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.524996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.525027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.525409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.525440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.525805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.525834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.526193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.526223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.526587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.526614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.526992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.527020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.527401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.527431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.527778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.527806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 
00:32:52.372 [2024-11-19 11:00:31.528197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.528227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.528626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.528654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.529020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.529048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.529385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.529414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.529753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.372 [2024-11-19 11:00:31.529783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.372 qpair failed and we were unable to recover it. 00:32:52.372 [2024-11-19 11:00:31.530150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.644 [2024-11-19 11:00:31.530190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.644 qpair failed and we were unable to recover it. 00:32:52.644 [2024-11-19 11:00:31.530604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.644 [2024-11-19 11:00:31.530636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.644 qpair failed and we were unable to recover it. 00:32:52.644 [2024-11-19 11:00:31.530877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.644 [2024-11-19 11:00:31.530907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.644 qpair failed and we were unable to recover it. 00:32:52.644 [2024-11-19 11:00:31.531290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.644 [2024-11-19 11:00:31.531321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.644 qpair failed and we were unable to recover it. 00:32:52.644 [2024-11-19 11:00:31.531668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.644 [2024-11-19 11:00:31.531696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.644 qpair failed and we were unable to recover it. 
00:32:52.644 [2024-11-19 11:00:31.532028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.644 [2024-11-19 11:00:31.532056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:52.644 qpair failed and we were unable to recover it.
00:32:52.650 [2024-11-19 11:00:31.612490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.612519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.612893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.612919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.613263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.613292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.613648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.613676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.614041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.614068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.614398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.614427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.614678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.614710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.615082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.615110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.615442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.615472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.615826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.615854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 
00:32:52.650 [2024-11-19 11:00:31.616227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.616257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.616617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.616646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.617006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.617035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.617382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.617414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.617781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.617811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.618190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.618221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.618488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.618520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.618769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.618801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.619151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.619191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.619576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.619606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 
00:32:52.650 [2024-11-19 11:00:31.619857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.619886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.620235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.620266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.620636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.620666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.620965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.620996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.621336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.621368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.621589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.621622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.622008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.622037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.622406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.622438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.622796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.622826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.623109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.623138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 
00:32:52.650 [2024-11-19 11:00:31.623540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.623571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.623828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.650 [2024-11-19 11:00:31.623859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.650 qpair failed and we were unable to recover it. 00:32:52.650 [2024-11-19 11:00:31.624208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.624239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.624600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.624630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.625045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.625075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.625422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.625453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.625857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.625886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.626256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.626303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.626641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.626671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.627037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.627066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 
00:32:52.651 [2024-11-19 11:00:31.627314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.627347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.627718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.627748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.628124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.628152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.628572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.628602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.628961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.628990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.629387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.629419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.629775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.629803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.630168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.630199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.630556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.630585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.630843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.630871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 
00:32:52.651 [2024-11-19 11:00:31.631122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.631149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.631533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.631564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.631923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.631951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.632319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.632349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.632716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.632745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.633181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.633212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.633555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.633584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.633957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.633986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.634372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.634403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.634761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.634789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 
00:32:52.651 [2024-11-19 11:00:31.634932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.634962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.635370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.635400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.635756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.635785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.636133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.636170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.636564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.636594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.636957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.636986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.637350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.637380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.637760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.637789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.638011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.638039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 00:32:52.651 [2024-11-19 11:00:31.638470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.651 [2024-11-19 11:00:31.638501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.651 qpair failed and we were unable to recover it. 
00:32:52.652 [2024-11-19 11:00:31.638822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.638853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.639108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.639136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.639509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.639539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.639886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.639915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.640152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.640190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.640577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.640605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.640961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.640991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.641347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.641391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.641753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.641783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.642181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.642212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 
00:32:52.652 [2024-11-19 11:00:31.642570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.642599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.642858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.642886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.643136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.643180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.643586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.643617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.643981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.644009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.644390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.644421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.644663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.644692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.645066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.645095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.645493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.645523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.645904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.645932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 
00:32:52.652 [2024-11-19 11:00:31.646288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.646318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.646692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.646721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.647084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.647113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.647467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.647498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.647937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.647965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.648316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.648348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.648727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.648755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.649104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.649132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.649501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.649531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.649898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.649927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 
00:32:52.652 [2024-11-19 11:00:31.650299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.650329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.650726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.650754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.650990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.651017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.651361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.651391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.652 [2024-11-19 11:00:31.651618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.652 [2024-11-19 11:00:31.651646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.652 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.652005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.652034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.652429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.652461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.652809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.652838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.653204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.653234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.653601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.653629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 
00:32:52.653 [2024-11-19 11:00:31.653995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.654023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.654392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.654423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.654794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.654823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.655173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.655204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.655574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.655602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.655844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.655872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.656222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.656253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.656629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.656664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.657023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.657051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.657419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.657448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 
00:32:52.653 [2024-11-19 11:00:31.657807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.657838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.658202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.658232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.658593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.658621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.658988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.659016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.659383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.659416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.659765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.659793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.660182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.660213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.660566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.660594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.660986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.661015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.661391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.661421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 
00:32:52.653 [2024-11-19 11:00:31.661763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.661791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.662223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.662254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.662602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.662630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.662998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.663026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.663432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.663462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.663815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.663843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.664190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.653 [2024-11-19 11:00:31.664220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.653 qpair failed and we were unable to recover it. 00:32:52.653 [2024-11-19 11:00:31.664592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.664621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.664986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.665014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.665393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.665423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 
00:32:52.654 [2024-11-19 11:00:31.665780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.665809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.666151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.666190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.666625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.666654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.667016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.667045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.667386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.667417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.667787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.667817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.668255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.668285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.668639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.668669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.668916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.668947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.669291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.669321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 
00:32:52.654 [2024-11-19 11:00:31.669651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.669680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.670046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.670074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.670433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.670463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.670828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.670856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.671232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.671261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.671511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.671538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.671889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.671919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.672292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.672330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.672750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.672780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.673133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.673178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 
00:32:52.654 [2024-11-19 11:00:31.673543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.673573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.673919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.673949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.674324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.674358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.674719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.674749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.675115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.675145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.675544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.675574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.675932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.675961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.676319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.676351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.676710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.676739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.677180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.677212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 
00:32:52.654 [2024-11-19 11:00:31.677572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.677602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.677966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.677996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.678341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.678373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.678806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.654 [2024-11-19 11:00:31.678836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.654 qpair failed and we were unable to recover it. 00:32:52.654 [2024-11-19 11:00:31.679196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.679226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.679585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.679616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.679978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.680008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.680349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.680381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.680737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.680768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.681121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.681151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 
00:32:52.655 [2024-11-19 11:00:31.681521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.681552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.681795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.681828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.682182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.682215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.682577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.682606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.682973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.683005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.683363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.683394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.683749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.683779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.684215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.684247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.684534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.684563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.684920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.684951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 
00:32:52.655 [2024-11-19 11:00:31.685319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.685352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.685698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.685730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.686092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.686124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.686485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.686517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.686760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.686793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.687139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.687194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.687595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.687627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.688052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.688088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.688428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.688463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.688835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.688867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 
00:32:52.655 [2024-11-19 11:00:31.689231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.689262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.689628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.689658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.690032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.690064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.690432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.690463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.690812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.690843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.691201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.691232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.691590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.691619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.691974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.692005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.692394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.692427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.692810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.692842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 
00:32:52.655 [2024-11-19 11:00:31.693189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.693221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.655 qpair failed and we were unable to recover it. 00:32:52.655 [2024-11-19 11:00:31.693574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.655 [2024-11-19 11:00:31.693605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.693963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.693994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.694263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.694295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.694685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.694717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.695064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.695095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.695457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.695489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.695851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.695882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.696238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.696271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.696641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.696672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 
00:32:52.656 [2024-11-19 11:00:31.696918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.696948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.697300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.697332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.697702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.697731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.698134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.698175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.698557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.698589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.698941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.698971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.699328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.699361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.699707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.699737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.700180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.700213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.700567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.700597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 
00:32:52.656 [2024-11-19 11:00:31.700954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.700983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.701343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.701377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.701730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.701761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.702125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.702157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.702517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.702548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.702904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.702935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.703297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.703329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.703690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.703720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.704084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.704114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.704351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.704383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 
00:32:52.656 [2024-11-19 11:00:31.704745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.704775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.705180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.705214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.705605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.705637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.705981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.706011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.706371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.706402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.706757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.706788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.707150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.707199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.707526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.707556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.707925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.656 [2024-11-19 11:00:31.707958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.656 qpair failed and we were unable to recover it. 00:32:52.656 [2024-11-19 11:00:31.708313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.708346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 
00:32:52.657 [2024-11-19 11:00:31.708708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.708737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.709075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.709106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.709462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.709494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.709847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.709877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.710216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.710248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.710620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.710650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.711015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.711045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.711411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.711444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.711800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.711830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.712191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.712222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 
00:32:52.657 [2024-11-19 11:00:31.712462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.712495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.712845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.712876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.713237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.713270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.713619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.713648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.714005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.714040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.714407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.714438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.714802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.714833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.715077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.715109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.715490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.715522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.715874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.715906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 
00:32:52.657 [2024-11-19 11:00:31.716240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.716272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.716525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.716556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.716895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.716925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.717284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.717316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.717690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.717720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.718074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.718104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.718484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.718516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.718848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.718880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.719295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.719327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.719714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.719746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 
00:32:52.657 [2024-11-19 11:00:31.720140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.720182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.720545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.720574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.720945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.720975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.721333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.721364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.721719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.721750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.722118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.722148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.657 [2024-11-19 11:00:31.722535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.657 [2024-11-19 11:00:31.722567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.657 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.722923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.722953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.723316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.723348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.723726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.723757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 
00:32:52.658 [2024-11-19 11:00:31.724122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.724151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.724533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.724565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.724705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.724736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.725126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.725156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.725531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.725563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.725925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.725957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.726321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.726354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.726730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.726760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.727124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.727155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.727544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.727575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 
00:32:52.658 [2024-11-19 11:00:31.728016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.728047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.728412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.728446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.728694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.728727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.729142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.729182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.729564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.729602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.729958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.729990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.730241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.730272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.730595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.730625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.730981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.731011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 00:32:52.658 [2024-11-19 11:00:31.731382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.658 [2024-11-19 11:00:31.731413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.658 qpair failed and we were unable to recover it. 
00:32:52.658 [... 2024-11-19 11:00:31.731771 through 11:00:31.804725: the same three-line record (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry attempt; duplicate records elided ...]
00:32:52.664 [2024-11-19 11:00:31.805063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.805094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.805454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.805486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.805845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.805876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.806228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.806261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.806630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.806667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.807024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.807054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.807408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.807439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.807839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.807868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.808222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.808254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.808500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.808533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 
00:32:52.664 [2024-11-19 11:00:31.808761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.808792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.809145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.809200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.809570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.809602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.809963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.809993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.810366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.810398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.810757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.810787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.811145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.811185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.811537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.811567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.811933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.811962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.812220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.812252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 
00:32:52.664 [2024-11-19 11:00:31.812615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.812645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.813004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.813034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.813395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.813426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.813777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.813807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.814053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.814083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.814422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.814455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.814804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.814835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.815193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.815225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.815394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.815426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.815828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.815858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 
00:32:52.664 [2024-11-19 11:00:31.816217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.816250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.816628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.816660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.816894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.816923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.817263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.817296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.817665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.817697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.818046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.818077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.818435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.818468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.664 qpair failed and we were unable to recover it. 00:32:52.664 [2024-11-19 11:00:31.818824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.664 [2024-11-19 11:00:31.818857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.819212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.819243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.819677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.819708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 
00:32:52.665 [2024-11-19 11:00:31.820056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.820088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.820472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.820503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.820881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.820913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.821272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.821304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.821661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.821696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.822048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.822080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.822432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.822466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.822823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.822854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.823215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.823249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.823631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.823662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 
00:32:52.665 [2024-11-19 11:00:31.824021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.824051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.824412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.824445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.824800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.824830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.825172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.825204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.825563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.825593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.825951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.825982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.826341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.826373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.826733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.826763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.827156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.827201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 00:32:52.665 [2024-11-19 11:00:31.827545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.665 [2024-11-19 11:00:31.827575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.665 qpair failed and we were unable to recover it. 
00:32:52.939 [2024-11-19 11:00:31.827822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.939 [2024-11-19 11:00:31.827854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.939 qpair failed and we were unable to recover it. 00:32:52.939 [2024-11-19 11:00:31.828086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.939 [2024-11-19 11:00:31.828117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.939 qpair failed and we were unable to recover it. 00:32:52.939 [2024-11-19 11:00:31.828385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.939 [2024-11-19 11:00:31.828416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.939 qpair failed and we were unable to recover it. 00:32:52.939 [2024-11-19 11:00:31.828776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.939 [2024-11-19 11:00:31.828806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.939 qpair failed and we were unable to recover it. 00:32:52.939 [2024-11-19 11:00:31.829184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.939 [2024-11-19 11:00:31.829216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.939 qpair failed and we were unable to recover it. 00:32:52.939 [2024-11-19 11:00:31.829558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.829590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.829952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.829984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.830390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.830423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.830773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.830806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.831177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.831209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 
00:32:52.940 [2024-11-19 11:00:31.831567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.831597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.831964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.831997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.832402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.832434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.832795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.832824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.833191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.833223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.833579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.833611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.833971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.834002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.834368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.834400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.834752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.834783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.835138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.835181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 
00:32:52.940 [2024-11-19 11:00:31.835531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.835561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.835925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.835956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.836306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.836337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.836583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.836612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.836951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.836990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.837309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.837341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.837691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.837721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.838080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.838109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.838502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.838533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.838887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.838918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 
00:32:52.940 [2024-11-19 11:00:31.839283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.839316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.839669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.839700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.840057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.840090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.840425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.840456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.840811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.840843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.841198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.841230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.841587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.841617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.841977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.842007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.842375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.842407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.842759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.842788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 
00:32:52.940 [2024-11-19 11:00:31.843122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.843152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.843528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.940 [2024-11-19 11:00:31.843558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.940 qpair failed and we were unable to recover it. 00:32:52.940 [2024-11-19 11:00:31.843924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.843957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.844337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.844369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.844718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.844749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.845115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.845146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.845527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.845558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.845906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.845936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.846288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.846320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.846689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.846720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 
00:32:52.941 [2024-11-19 11:00:31.847080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.847110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.847505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.847537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.847897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.847927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.848195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.848227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.848579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.848612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.848971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.849003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.849386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.849417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.849774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.849804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.850178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.850211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.850564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.850594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 
00:32:52.941 [2024-11-19 11:00:31.850950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.850980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.851331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.851362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.851715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.851746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.852109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.852139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.852535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.852572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.852918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.852951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.853308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.853342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.853694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.853724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.854074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.854104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.854488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.854523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 
00:32:52.941 [2024-11-19 11:00:31.854868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.854900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.855258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.855289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.855528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.855562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.855911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.855942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.856298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.856330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.856675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.856705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.857055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.857086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.857441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.857474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.857835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.857866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.941 qpair failed and we were unable to recover it. 00:32:52.941 [2024-11-19 11:00:31.858212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.941 [2024-11-19 11:00:31.858243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.942 qpair failed and we were unable to recover it. 
00:32:52.942 [2024-11-19 11:00:31.858593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.942 [2024-11-19 11:00:31.858625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:52.942 qpair failed and we were unable to recover it.
00:32:52.942 [identical error triple repeated for every subsequent reconnect attempt to 10.0.0.2, port=4420 between 11:00:31.858 and 11:00:31.939]
00:32:52.947 [2024-11-19 11:00:31.939195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.947 [2024-11-19 11:00:31.939228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:52.947 qpair failed and we were unable to recover it.
00:32:52.947 [2024-11-19 11:00:31.939502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-11-19 11:00:31.939534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.947 qpair failed and we were unable to recover it. 00:32:52.947 [2024-11-19 11:00:31.939919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-11-19 11:00:31.939956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.947 qpair failed and we were unable to recover it. 00:32:52.947 [2024-11-19 11:00:31.940184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-11-19 11:00:31.940218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.947 qpair failed and we were unable to recover it. 00:32:52.947 [2024-11-19 11:00:31.940576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-11-19 11:00:31.940606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.947 qpair failed and we were unable to recover it. 00:32:52.947 [2024-11-19 11:00:31.940975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-11-19 11:00:31.941005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.947 qpair failed and we were unable to recover it. 00:32:52.947 [2024-11-19 11:00:31.941343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-11-19 11:00:31.941375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.947 qpair failed and we were unable to recover it. 00:32:52.947 [2024-11-19 11:00:31.941730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-11-19 11:00:31.941761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.947 qpair failed and we were unable to recover it. 00:32:52.947 [2024-11-19 11:00:31.942153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-11-19 11:00:31.942200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.947 qpair failed and we were unable to recover it. 00:32:52.947 [2024-11-19 11:00:31.942574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-11-19 11:00:31.942605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.942945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.942975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 
00:32:52.948 [2024-11-19 11:00:31.943205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.943237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.943607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.943636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.943996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.944026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.944265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.944298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.944570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.944600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.944958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.944989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.945348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.945380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.945739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.945768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.946128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.946167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.946520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.946550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 
00:32:52.948 [2024-11-19 11:00:31.946952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.946982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.947332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.947363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.947726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.947757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.948112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.948142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.948504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.948535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.948894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.948925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.949277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.949309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.949663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.949694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.950108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.950140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.950516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.950548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 
00:32:52.948 [2024-11-19 11:00:31.950912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.950944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.951281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.951315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.951666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.951697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.952049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.952081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.952450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.952481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.952839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.952869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.953112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.953143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.953515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.953546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.953896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.953927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.954178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.954210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 
00:32:52.948 [2024-11-19 11:00:31.954596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.954626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.954992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.955028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.955384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.955416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.955779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.955809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.956178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.956209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.956582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-11-19 11:00:31.956613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.948 qpair failed and we were unable to recover it. 00:32:52.948 [2024-11-19 11:00:31.956968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.956999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.957348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.957381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.957730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.957760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.958120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.958151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 
00:32:52.949 [2024-11-19 11:00:31.958533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.958564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.958807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.958839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.959206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.959238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.959592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.959625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.959847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.959877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.960233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.960267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.960641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.960673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.961038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.961067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.961422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.961453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.961805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.961835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 
00:32:52.949 [2024-11-19 11:00:31.962184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.962216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.962588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.962620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.962866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.962897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.963258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.963290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.963649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.963679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.964036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.964066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.964439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.964470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.964819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.964849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.965210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.965242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.965616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.965647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 
00:32:52.949 [2024-11-19 11:00:31.966057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.966087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.966449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.966481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.966916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.966947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.967283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.967314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.967684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.967715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.968077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.968109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.968497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.968530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.949 [2024-11-19 11:00:31.968875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.949 [2024-11-19 11:00:31.968905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.949 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.969051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.969085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.969473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.969506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 
00:32:52.950 [2024-11-19 11:00:31.969741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.969773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.970127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.970188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.970554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.970585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.970941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.970971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.971234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.971266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.971640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.971670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.972116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.972145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.972489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.972520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.972878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.972910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.973290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.973322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 
00:32:52.950 [2024-11-19 11:00:31.973709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.973741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.974091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.974123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.974506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.974539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.974861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.974891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.975253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.975284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.975648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.975680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.975917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.975950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.976213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.976244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.976612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.976643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.977005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.977035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 
00:32:52.950 [2024-11-19 11:00:31.977403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.977433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.977796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.977826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.978185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.978219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.978535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.978567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.978935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.978966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.979325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.979357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.979635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.979664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.980012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.980042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.980281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.980314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.980703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.980734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 
00:32:52.950 [2024-11-19 11:00:31.981097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.981130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.981484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.981516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.981773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.981804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.982113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.982144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.982509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.950 [2024-11-19 11:00:31.982541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.950 qpair failed and we were unable to recover it. 00:32:52.950 [2024-11-19 11:00:31.982900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.982931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.983283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.983317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.983666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.983696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.984046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.984077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.984421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.984452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 
00:32:52.951 [2024-11-19 11:00:31.984810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.984840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.985080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.985120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.985487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.985519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.985880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.985911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.986277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.986308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.986579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.986610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.986971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.987003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.987228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.987260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.987626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.987657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 00:32:52.951 [2024-11-19 11:00:31.988019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.951 [2024-11-19 11:00:31.988051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.951 qpair failed and we were unable to recover it. 
00:32:52.951 [2024-11-19 11:00:31.988419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.951 [2024-11-19 11:00:31.988450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:52.951 qpair failed and we were unable to recover it.
[last three messages repeated ~210 times between 11:00:31.988 and 11:00:32.068; every reconnect attempt to 10.0.0.2:4420 failed with errno = 111 and the qpair could not be recovered]
00:32:52.957 [2024-11-19 11:00:32.068130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.957 [2024-11-19 11:00:32.068194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:52.957 qpair failed and we were unable to recover it.
00:32:52.957 [2024-11-19 11:00:32.068549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.068581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.068950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.068982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.069338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.069370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.069726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.069757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.071590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.071655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.072092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.072127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.072524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.072559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.072891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.072924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.073210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.073248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.073591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.073625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 
00:32:52.957 [2024-11-19 11:00:32.073984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.074015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.074348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.074382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.074727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.074758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.075122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.075153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.075519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.075551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.075907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.075939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.076302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.076335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.076689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.076721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.077083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.077115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.077510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.077542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 
00:32:52.957 [2024-11-19 11:00:32.077889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.077922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.078282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.078322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.078667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.078698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.079060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.079091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.079457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.079491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.079845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.079874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.080232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.080266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.080513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.080545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.080900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.080933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-11-19 11:00:32.081276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.957 [2024-11-19 11:00:32.081309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.957 qpair failed and we were unable to recover it. 
00:32:52.958 [2024-11-19 11:00:32.081666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.081697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.082044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.082077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.082440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.082473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.082830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.082861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.083108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.083137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.083530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.083562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.083917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.083948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.084306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.084338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.084705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.084736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.085092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.085126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 
00:32:52.958 [2024-11-19 11:00:32.085518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.085553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.085897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.085930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.086294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.086327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.086748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.086780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.087148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.087192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.087569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.087602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.087869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.087900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.088241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.088275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.088559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.088590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.089017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.089049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 
00:32:52.958 [2024-11-19 11:00:32.089402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.089436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.089793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.089824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.090188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.090223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.090579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.090610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.090962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.090994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.091262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.091294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.091657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.091690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.092044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.092075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.092470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.092503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.092864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.092898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 
00:32:52.958 [2024-11-19 11:00:32.093237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.093270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.093626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.958 [2024-11-19 11:00:32.093661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-11-19 11:00:32.094030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.094061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.094409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.094444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.094800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.094832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.095222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.095255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.095505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.095540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.095933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.095965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.096315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.096350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.096702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.096734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 
00:32:52.959 [2024-11-19 11:00:32.097082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.097113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.097519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.097551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.097829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.097860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.098205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.098238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.098501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.098531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.098945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.098976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.099406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.099438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.099795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.099827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.100185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.100218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.100579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.100611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 
00:32:52.959 [2024-11-19 11:00:32.100971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.101002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.101345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.101380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.101755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.101786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.102137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.102184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.102518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.102549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.102910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.102943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.103210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.103242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.103635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.103666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.104066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.104104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.104482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.104515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 
00:32:52.959 [2024-11-19 11:00:32.104877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.104909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.105234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.105267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.105630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.105662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.106010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.106043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.106444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.106477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.106835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.106867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.107220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.107253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.107498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.107528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.107877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.107907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-11-19 11:00:32.108195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.959 [2024-11-19 11:00:32.108228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 
00:32:52.960 [2024-11-19 11:00:32.108579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.108611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.108866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.108897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.109276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.109309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.109686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.109718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.110076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.110108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.110474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.110507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.110867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.110899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.111233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.111266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.111657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.111688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.112035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.112068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 
00:32:52.960 [2024-11-19 11:00:32.112424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.112457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.112819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.112851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.113216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.113249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.113621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.113652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.113960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.113992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.114337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.114370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.114623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.114654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.114992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.115024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.115351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.115382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.115739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.115771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 
00:32:52.960 [2024-11-19 11:00:32.116129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.116172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.116450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.116484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.116832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.116864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.117227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.117260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.117622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.117654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.118083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.118114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.118505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.118537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.118892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.118922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.119280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.119318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.119686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.119717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 
00:32:52.960 [2024-11-19 11:00:32.120079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.120109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.120527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.120559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.120947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.120978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.121335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.121368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.121733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.121764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:52.960 qpair failed and we were unable to recover it. 00:32:52.960 [2024-11-19 11:00:32.122018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.960 [2024-11-19 11:00:32.122048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.236 qpair failed and we were unable to recover it. 00:32:53.236 [2024-11-19 11:00:32.122389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.236 [2024-11-19 11:00:32.122423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.236 qpair failed and we were unable to recover it. 00:32:53.236 [2024-11-19 11:00:32.122791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.236 [2024-11-19 11:00:32.122824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.236 qpair failed and we were unable to recover it. 00:32:53.236 [2024-11-19 11:00:32.123193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.236 [2024-11-19 11:00:32.123227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.236 qpair failed and we were unable to recover it. 00:32:53.236 [2024-11-19 11:00:32.123577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.236 [2024-11-19 11:00:32.123608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.236 qpair failed and we were unable to recover it. 
00:32:53.242 [2024-11-19 11:00:32.197704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.242 [2024-11-19 11:00:32.197735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.242 qpair failed and we were unable to recover it. 00:32:53.242 [2024-11-19 11:00:32.198103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.242 [2024-11-19 11:00:32.198135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.242 qpair failed and we were unable to recover it. 00:32:53.242 [2024-11-19 11:00:32.198536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.242 [2024-11-19 11:00:32.198569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.242 qpair failed and we were unable to recover it. 00:32:53.242 [2024-11-19 11:00:32.198920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.242 [2024-11-19 11:00:32.198953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.242 qpair failed and we were unable to recover it. 00:32:53.242 [2024-11-19 11:00:32.199292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.242 [2024-11-19 11:00:32.199324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.242 qpair failed and we were unable to recover it. 00:32:53.242 [2024-11-19 11:00:32.199674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.242 [2024-11-19 11:00:32.199705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.242 qpair failed and we were unable to recover it. 00:32:53.242 [2024-11-19 11:00:32.200058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.242 [2024-11-19 11:00:32.200088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.242 qpair failed and we were unable to recover it. 00:32:53.242 [2024-11-19 11:00:32.200444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.242 [2024-11-19 11:00:32.200478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.242 qpair failed and we were unable to recover it. 00:32:53.242 [2024-11-19 11:00:32.200832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.242 [2024-11-19 11:00:32.200861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.242 qpair failed and we were unable to recover it. 00:32:53.242 [2024-11-19 11:00:32.201267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.242 [2024-11-19 11:00:32.201299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.242 qpair failed and we were unable to recover it. 
00:32:53.242 [... connect() retry errors continue through 11:00:32.203645 ...]
00:32:53.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1213788 Killed "${NVMF_APP[@]}" "$@"
00:32:53.242 [... connect() retry errors continue ...]
00:32:53.242 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:32:53.242 [... connect() retry errors continue ...]
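For context on the error flood above: errno 111 is ECONNREFUSED. The harness has just killed the nvmf target process (the "Killed" line), so nothing is listening on 10.0.0.2:4420 and the kernel answers every TCP SYN with a reset; each connect() from the host's reconnect path therefore fails immediately, and it keeps retrying until the target is restarted. A minimal illustrative sketch of that pattern (not SPDK's actual nvme_tcp reconnect code; the address and port are taken from the log) is:

/* Reproduce the "connect() failed, errno = 111" retry pattern:
 * connecting to a TCP port with no listener fails with ECONNREFUSED,
 * and the caller retries until a listener appears. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4420) };   /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);          /* target address from the log */

    for (int attempt = 0; attempt < 100; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected after %d attempts\n", attempt + 1);
            close(fd);
            return 0;
        }
        /* With no listener this prints: connect() failed, errno = 111 */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        usleep(100 * 1000);    /* back off briefly before the next attempt */
    }
    return 1;
}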
00:32:53.242 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:53.242 [... connect() retry errors continue ...]
00:32:53.242 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:53.242 [... connect() retry errors continue ...]
00:32:53.242 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:53.242 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:53.242 [... connect() retry errors continue ...]
00:32:53.242 [... connect() retry errors continue, 11:00:32.208343 onward ...]
00:32:53.243 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1214802
00:32:53.243 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1214802
00:32:53.243 [... connect() retry errors continue ...]
00:32:53.243 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:53.243 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1214802 ']'
00:32:53.243 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:53.243 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:53.243 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:53.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:53.243 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:53.243 11:00:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:53.243 [... connect() retry errors continue, interleaved with the shell trace above ...]
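At this point the harness relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocks until the new process (pid 1214802) is serving its RPC socket, using the rpc_addr and max_retries values traced above. A rough sketch of what such a readiness check amounts to; the real helper is shell code in the test harness, and the function name here is illustrative:

/* Poll the application's UNIX-domain RPC socket until connect()
 * succeeds or the retry budget runs out. Path and retry count mirror
 * rpc_addr=/var/tmp/spdk.sock and max_retries=100 from the trace. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;          /* target is up and listening */
        }
        close(fd);             /* socket absent or refusing; retry */
        usleep(100 * 1000);
    }
    return -1;                 /* gave up waiting */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "nvmf_tgt never came up within the retry budget\n");
        return 1;
    }
    puts("process is up and listening on /var/tmp/spdk.sock");
    return 0;
}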
00:32:53.243 [... the connect() failed (errno = 111) / sock connection error of tqpair=0x7f8410000b90 / "qpair failed and we were unable to recover it." sequence keeps repeating for every further reconnect attempt, 11:00:32.220798 through 11:00:32.258222 ...]
00:32:53.248 [2024-11-19 11:00:32.258590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.258622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.258989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.259022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.259281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.259315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.259428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.259463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 
Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 [2024-11-19 11:00:32.260282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Read completed with error (sct=0, sc=8) 00:32:53.248 starting I/O failed 00:32:53.248 Write completed with error (sct=0, sc=8) 00:32:53.248 starting I/O 
failed 00:32:53.248 [2024-11-19 11:00:32.261094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.248 [2024-11-19 11:00:32.261723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.261832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.262438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.262547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.262838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.262876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.263478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.263587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.264033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.264072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.264414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.264450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.264799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.264831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.265186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.265220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 00:32:53.248 [2024-11-19 11:00:32.265568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.265599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.248 qpair failed and we were unable to recover it. 
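The burst of `Read/Write completed with error (sct=0, sc=8) ... starting I/O failed` lines, capped by `CQ transport error -6 (No such device or address)` on qpair ids 4 and 3, records the in-flight I/O being failed back as the qpairs die. Reading the numbers: -6 is negative ENXIO, matching the quoted strerror text, and (sct=0, sc=8) is an NVMe completion status, where status code type 0 is Generic Command Status and, per the NVMe base specification, code 0x08 in that set is "Command Aborted due to SQ Deletion", i.e. what queued commands report when their submission queue is torn down. A small decoder sketch (again not SPDK code, just the two literal values from the log):

```c
/* Hedged sketch: decodes the two numbers the burst above repeats.
 * (sct=0, sc=8): NVMe Generic Command Status 0x08 is, per the NVMe
 * base spec, "Command Aborted due to SQ Deletion".
 * Transport error -6 is negative ENXIO ("No such device or address"). */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int sct = 0, sc = 8;     /* values from the completion entries above */
    int transport_err = -6;  /* value from the CQ transport error lines */

    printf("sct=%d sc=%d -> %s\n", sct, sc,
           (sct == 0 && sc == 8)
               ? "Generic Command Status: Command Aborted due to SQ Deletion"
               : "see NVMe spec status tables");
    printf("CQ transport error %d -> %s\n",
           transport_err, strerror(-transport_err));  /* No such device or address */
    return 0;
}
```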
00:32:53.248 [2024-11-19 11:00:32.265928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.248 [2024-11-19 11:00:32.265961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.266343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.266377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.266770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.266801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.267223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.267268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.267513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.267544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.267897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.267927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.268280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.268315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.268679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.268710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.269062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.269096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.269473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.269506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 
00:32:53.249 [2024-11-19 11:00:32.269890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.269922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.270205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.270237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.270562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.270593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.270956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.270987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.271343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.271375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.271617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.271646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.271997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.272029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.272392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.272426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.272693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.272723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.273080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.273112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 
00:32:53.249 [2024-11-19 11:00:32.273595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.273627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.273985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.273979] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:32:53.249 [2024-11-19 11:00:32.274018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.274040] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.249 [2024-11-19 11:00:32.274245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.274278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.274608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.274637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.275029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.275058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.275408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.275442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.275802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.275834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.276064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.276096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.276480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.276512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it.
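The `Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization` and `DPDK EAL parameters` entries above show the nvmf target process coming back up mid-test; its stdout is interleaved with the initiator's reconnect errors, which is why the raw capture had fused `--proc-type=auto` with a stray qpair message. In those EAL parameters, `-c 0xF0` is the hexadecimal core mask: bit n set selects CPU core n, so 0xF0 pins the target to cores 4-7. A one-file sketch of that decoding (illustrative only, not DPDK code):

```c
/* Hedged sketch: expands the DPDK EAL core mask from the line above.
 * "-c 0xF0" is a hex bitmask of CPU cores; bit n set means core n is
 * used, so 0xF0 selects cores 4-7 for the nvmf target process. */
#include <stdio.h>

int main(void)
{
    unsigned long long coremask = 0xF0ULL;  /* from "-c 0xF0" in the EAL parameters */

    printf("coremask 0x%llX -> cores:", coremask);
    for (int core = 0; core < 64; core++) {
        if (coremask & (1ULL << core)) {
            printf(" %d", core);
        }
    }
    printf("\n");  /* prints: coremask 0xF0 -> cores: 4 5 6 7 */
    return 0;
}
```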
00:32:53.249 [2024-11-19 11:00:32.276782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.276828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.277187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.277219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.277556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.277590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.277965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.277998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.278350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.278384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.278741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.278774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.279147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.279189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.279575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.249 [2024-11-19 11:00:32.279607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.249 qpair failed and we were unable to recover it. 00:32:53.249 [2024-11-19 11:00:32.279978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.280011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.280373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.280406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 
00:32:53.250 [2024-11-19 11:00:32.280821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.280853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.281264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.281297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.281720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.281751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.281974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.282006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.282383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.282415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.282780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.282812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.283175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.283209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.283585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.283616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.283981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.284013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.284400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.284433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 
00:32:53.250 [2024-11-19 11:00:32.284689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.284720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.285069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.285101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.285453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.285486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.285737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.285774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.286119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.286151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.286542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.286573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.286694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.286723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.287097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.287134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.287493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.287527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.287880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.287911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 
00:32:53.250 [2024-11-19 11:00:32.288274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.288307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.288658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.288688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.289055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.289087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.289450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.289481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.289745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.289774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.290134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.290175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.290534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.290565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.290924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.290955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.291325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.291358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.291711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.291740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 
00:32:53.250 [2024-11-19 11:00:32.292110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.292142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.292511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.292544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.292871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.292901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.293260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.293295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.293662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.293694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.294043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.250 [2024-11-19 11:00:32.294074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.250 qpair failed and we were unable to recover it. 00:32:53.250 [2024-11-19 11:00:32.294293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.294326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.294688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.294719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.295095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.295127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.295402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.295434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 
00:32:53.251 [2024-11-19 11:00:32.295788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.295818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.296090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.296123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.296519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.296552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.296910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.296941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.297302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.297337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.297699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.297730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.298086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.298117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.298477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.298510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.298867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.298899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.299250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.299282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 
00:32:53.251 [2024-11-19 11:00:32.299661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.299692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.300060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.300090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.300468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.300499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.300855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.300885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.301275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.301309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.301687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.301718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.301983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.302011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.302342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.302374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.302738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.302770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.303118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.303149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 
00:32:53.251 [2024-11-19 11:00:32.303498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.303528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.303892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.303923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.304271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.304303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.304672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.304703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.305045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.305077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.305333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.305365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.305588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.305618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.305964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.305994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.306330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.306362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 00:32:53.251 [2024-11-19 11:00:32.306726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.251 [2024-11-19 11:00:32.306759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.251 qpair failed and we were unable to recover it. 
00:32:53.251 [2024-11-19 11:00:32.307127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.251 [2024-11-19 11:00:32.307156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.251 qpair failed and we were unable to recover it.
[the three-line error above recurs ~140 times across this span, identical except for the timestamps, which advance from 11:00:32.307491 through 11:00:32.358453; duplicates condensed]
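On Linux, errno 111 is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 (4420 being the IANA-registered NVMe/TCP port) is answered with an RST because the host is reachable but nothing is listening, which is what the initiator sees while the test has the NVMe-oF target down. A minimal sketch of that failure mode, using plain POSIX sockets rather than SPDK's posix_sock_create() path; the address and port are taken from the log:

```c
/* Minimal sketch of what errno = 111 (ECONNREFUSED) looks like at the
 * socket layer. Plain POSIX sockets, not SPDK code; address and port
 * mirror the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on the port, a reachable peer resets the
         * connection and connect() fails with ECONNREFUSED (111 on Linux). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```

Each failed attempt in the log maps to one such connect(): posix.c reports the raw errno, and nvme_tcp.c then gives up on that qpair.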
[same error triple repeats 7 more times, 11:00:32.358810 through 11:00:32.361205; condensed]
00:32:53.255 [2024-11-19 11:00:32.361524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[same error triple repeats 3 more times, 11:00:32.361595 through 11:00:32.362315; condensed]
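The lone NOTICE buried in the error spam marks an SPDK application process starting up, its output interleaved with the initiator's retries: app.c logs the reactor core count during spdk_app_start(), before the application's start callback runs. For orientation, a minimal SPDK app entry point looks roughly like the skeleton below; this is a hedged sketch from the public event-framework API (spdk/event.h), assuming a recent SPDK where spdk_app_opts_init() takes the options struct size, and the app name and reactor mask are illustrative placeholders:

```c
/* Hedged skeleton of an SPDK application entry point; "hello_app" and
 * the reactor mask are illustrative, not from the log. */
#include "spdk/event.h"

static void
start_fn(void *ctx)
{
	/* Runs on the first reactor once the framework is up; by this
	 * point spdk_app_start() has already printed the
	 * "Total cores available: N" NOTICE seen above. */
	spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "hello_app";
	opts.reactor_mask = "0xF";	/* 4 cores, matching the NOTICE */

	rc = spdk_app_start(&opts, start_fn, NULL);
	spdk_app_fini();
	return rc;
}
```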
[same error triple repeats ~60 more times, 11:00:32.362670 through 11:00:32.384849; duplicates condensed]
00:32:53.257 [2024-11-19 11:00:32.385209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.385241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.257 qpair failed and we were unable to recover it. 00:32:53.257 [2024-11-19 11:00:32.385609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.385640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.257 qpair failed and we were unable to recover it. 00:32:53.257 [2024-11-19 11:00:32.385878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.385912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.257 qpair failed and we were unable to recover it. 00:32:53.257 [2024-11-19 11:00:32.386254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.386286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.257 qpair failed and we were unable to recover it. 00:32:53.257 [2024-11-19 11:00:32.386539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.386568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.257 qpair failed and we were unable to recover it. 00:32:53.257 [2024-11-19 11:00:32.386929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.386958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.257 qpair failed and we were unable to recover it. 00:32:53.257 [2024-11-19 11:00:32.387307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.387340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.257 qpair failed and we were unable to recover it. 00:32:53.257 [2024-11-19 11:00:32.387687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.387718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.257 qpair failed and we were unable to recover it. 00:32:53.257 [2024-11-19 11:00:32.388067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.388104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.257 qpair failed and we were unable to recover it. 00:32:53.257 [2024-11-19 11:00:32.388507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.388540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.257 qpair failed and we were unable to recover it. 
00:32:53.257 [2024-11-19 11:00:32.388882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.257 [2024-11-19 11:00:32.388912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.389277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.389310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.389674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.389703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.390070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.390101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.390465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.390497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.390863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.390892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.391236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.391269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.391638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.391669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.392027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.392058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.392410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.392440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 
00:32:53.258 [2024-11-19 11:00:32.392796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.392829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.393190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.393224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.393619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.393650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.393999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.394031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.394413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.394444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.394787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.394818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.395179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.395212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.395569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.395600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.395832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.395863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.396220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.396251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 
00:32:53.258 [2024-11-19 11:00:32.396592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.396626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.396948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.396979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.397327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.397360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.397727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.397757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.398115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.398147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.398531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.398563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.398839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.398870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.399233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.399267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.399594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.399626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.399997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.400028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 
00:32:53.258 [2024-11-19 11:00:32.400374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.400406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.400817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.400849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.401193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.401226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.401595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.401626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.401982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.402013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.402368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.402399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.402761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.402791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.403148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.258 [2024-11-19 11:00:32.403189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.258 qpair failed and we were unable to recover it. 00:32:53.258 [2024-11-19 11:00:32.403516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.403545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.403907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.403937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 
00:32:53.259 [2024-11-19 11:00:32.404296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.404330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.404667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.404698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.404920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.404950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.405308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.405341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.405693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.405726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.406092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.406123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.406529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.406561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.406914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.406945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.407317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.407349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.407662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.407693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 
00:32:53.259 [2024-11-19 11:00:32.408048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.408077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.408452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.408487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.408840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.408870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.409240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.409273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.409649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.409681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.410017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.410047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.410414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.410447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.410796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.410825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.411179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.411210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.411569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.411600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 
00:32:53.259 [2024-11-19 11:00:32.411964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.411994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.412340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.412371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.412614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.412647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.412884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.412914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.413267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.413298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.413673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.413705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.413947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.413988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.414384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.414416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.414769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.414802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.415172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.415203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 
00:32:53.259 [2024-11-19 11:00:32.415420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.415450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.415671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.415702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.416071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.416102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.416488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.259 [2024-11-19 11:00:32.416520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.259 qpair failed and we were unable to recover it. 00:32:53.259 [2024-11-19 11:00:32.416872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.535 [2024-11-19 11:00:32.416903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.535 qpair failed and we were unable to recover it. 00:32:53.535 [2024-11-19 11:00:32.417249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.535 [2024-11-19 11:00:32.417284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.535 qpair failed and we were unable to recover it. 00:32:53.535 [2024-11-19 11:00:32.417627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.535 [2024-11-19 11:00:32.417661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.535 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.418033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.418062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.418411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.418443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.418799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.418831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 
00:32:53.536 [2024-11-19 11:00:32.419092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.419122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.419494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.419527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.419883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.419917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.420273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.420305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.420652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.420684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.421047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.421082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.421342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.421378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.421752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.421785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.422036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.422068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.422321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.422354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 
00:32:53.536 [2024-11-19 11:00:32.422754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.422786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.422916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:53.536 [2024-11-19 11:00:32.422978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:53.536 [2024-11-19 11:00:32.422989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:53.536 [2024-11-19 11:00:32.422999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:53.536 [2024-11-19 11:00:32.423006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:53.536 [2024-11-19 11:00:32.423149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.423197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.423513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.423546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.423901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.423932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.424284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.424317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.424570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.424602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.424946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.424978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.425345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.425378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
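The app_setup_trace notices above name two ways to pull trace data from the running target (errno 111 in the surrounding connect() failures is ECONNREFUSED, i.e. nothing was accepting on 10.0.0.2:4420 at this point). A minimal sketch of both capture paths, using only the instance id and shm file the log itself reports; the /tmp destination is illustrative:

  # snapshot tracepoints of the running nvmf app (instance id 0, per the notice above)
  spdk_trace -s nvmf -i 0
  # or save the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0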
00:32:53.536 [2024-11-19 11:00:32.425735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.425768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.425679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:32:53.536 [2024-11-19 11:00:32.425872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:32:53.536 [2024-11-19 11:00:32.426009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:32:53.536 [2024-11-19 11:00:32.426015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:32:53.536 [2024-11-19 11:00:32.426147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.426185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.426531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.426561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.426921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.426953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.427326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.427358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.427722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.427752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.428109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.428141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.428383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.428414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.428799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.536 [2024-11-19 11:00:32.428830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.536 qpair failed and we were unable to recover it.
00:32:53.536 [2024-11-19 11:00:32.429196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.429230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.429594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.429624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.429881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.429911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.430255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.536 [2024-11-19 11:00:32.430286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.536 qpair failed and we were unable to recover it. 00:32:53.536 [2024-11-19 11:00:32.430664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.537 [2024-11-19 11:00:32.430695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.537 qpair failed and we were unable to recover it. 00:32:53.537 [2024-11-19 11:00:32.431061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.537 [2024-11-19 11:00:32.431093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.537 qpair failed and we were unable to recover it. 00:32:53.537 [2024-11-19 11:00:32.431462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.537 [2024-11-19 11:00:32.431496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.537 qpair failed and we were unable to recover it. 00:32:53.537 [2024-11-19 11:00:32.431759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.537 [2024-11-19 11:00:32.431789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.537 qpair failed and we were unable to recover it. 00:32:53.537 [2024-11-19 11:00:32.432138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.537 [2024-11-19 11:00:32.432183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.537 qpair failed and we were unable to recover it. 00:32:53.537 [2024-11-19 11:00:32.432477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.537 [2024-11-19 11:00:32.432507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.537 qpair failed and we were unable to recover it. 
00:32:53.537 [2024-11-19 11:00:32.432837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.537 [2024-11-19 11:00:32.432875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.537 qpair failed and we were unable to recover it.
[this error group repeats continuously from 11:00:32.432 through 11:00:32.507 (wall clock 00:32:53.537-00:32:53.542): every connect() attempt for tqpair=0x15460c0 to 10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered; only the microsecond timestamps differ between repetitions]
00:32:53.542 [2024-11-19 11:00:32.507575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.542 [2024-11-19 11:00:32.507605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.542 qpair failed and we were unable to recover it. 00:32:53.542 [2024-11-19 11:00:32.507860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.542 [2024-11-19 11:00:32.507890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.542 qpair failed and we were unable to recover it. 00:32:53.542 [2024-11-19 11:00:32.508228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.542 [2024-11-19 11:00:32.508259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.542 qpair failed and we were unable to recover it. 00:32:53.542 [2024-11-19 11:00:32.508496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.542 [2024-11-19 11:00:32.508526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.542 qpair failed and we were unable to recover it. 00:32:53.542 [2024-11-19 11:00:32.508853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.542 [2024-11-19 11:00:32.508883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.542 qpair failed and we were unable to recover it. 00:32:53.542 [2024-11-19 11:00:32.509223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.542 [2024-11-19 11:00:32.509257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.542 qpair failed and we were unable to recover it. 00:32:53.542 [2024-11-19 11:00:32.509630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.509659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.510034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.510070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.510427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.510459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.510796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.510826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 
00:32:53.543 [2024-11-19 11:00:32.511185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.511218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.511579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.511609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.511970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.512000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.512250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.512284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.512488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.512517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.512861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.512891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.513107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.513137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.513512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.513542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.513898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.513928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.514175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.514207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 
00:32:53.543 [2024-11-19 11:00:32.514534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.514563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.514803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.514833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.515072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.515102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.515456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.515488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.515857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.515888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.516237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.516269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.516671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.516701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.517048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.517079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.517301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.517335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.517577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.517607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 
00:32:53.543 [2024-11-19 11:00:32.517815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.517844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.518051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.518083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.518456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.518487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.518834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.518865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.519211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.519250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.519627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.519658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.520017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.520049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.520379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.520417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.520787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.520817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.521194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.521226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 
00:32:53.543 [2024-11-19 11:00:32.521610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.521642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.521869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.521899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.522274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.522307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.522665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.522695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.543 [2024-11-19 11:00:32.523039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.543 [2024-11-19 11:00:32.523071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.543 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.523466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.523497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.523861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.523892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.524278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.524311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.524625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.524654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.524896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.524926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 
00:32:53.544 [2024-11-19 11:00:32.525287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.525320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.525677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.525708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.525937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.525967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.526336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.526368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.526742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.526773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.527119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.527152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.527494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.527525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.527878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.527909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.528242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.528274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.528638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.528671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 
00:32:53.544 [2024-11-19 11:00:32.529018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.529048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.529409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.529448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.529807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.529838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.530196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.530228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.530445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.530475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.530814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.530844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.531191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.531223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.531587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.531618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.531970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.532000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.532343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.532374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 
00:32:53.544 [2024-11-19 11:00:32.532749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.532779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.533139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.533178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.533549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.533580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.533929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.533959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.534304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.534339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.534661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.534692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.535018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.535050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.535387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.535420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.535771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.535802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.536031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.536061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 
00:32:53.544 [2024-11-19 11:00:32.536395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.536427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.536806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.536837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.544 [2024-11-19 11:00:32.537077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.544 [2024-11-19 11:00:32.537107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.544 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.537495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.537528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.537891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.537922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.538280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.538312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.538675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.538706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.539079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.539110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.539521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.539552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.539914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.539944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 
00:32:53.545 [2024-11-19 11:00:32.540198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.540229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.540578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.540609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.540964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.540994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.541220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.541251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.541462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.541492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.541864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.541893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.542126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.542156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.542528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.542558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.542770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.542799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.543152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.543203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 
00:32:53.545 [2024-11-19 11:00:32.543592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.543623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.543978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.544011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.544333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.544372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.544741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.544771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.545135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.545174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.545521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.545552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.545936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.545966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.546322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.546356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.546710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.546740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.546956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.546985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 
00:32:53.545 [2024-11-19 11:00:32.547338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.547369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.547714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.547746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.548101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.548132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.548395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.548426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.548796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.548826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.549192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.549227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.545 [2024-11-19 11:00:32.549595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.545 [2024-11-19 11:00:32.549626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.545 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.549988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.550018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.550351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.550382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.550743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.550772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 
00:32:53.546 [2024-11-19 11:00:32.551124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.551155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.551566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.551598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.551949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.551980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.552347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.552379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.552624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.552655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.553003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.553035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.553409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.553441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.553788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.553819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.554187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.554219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 00:32:53.546 [2024-11-19 11:00:32.554571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.554607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it. 
00:32:53.546 [2024-11-19 11:00:32.554962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.546 [2024-11-19 11:00:32.554994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.546 qpair failed and we were unable to recover it.
[... the identical three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, differing only in timestamps, from 2024-11-19 11:00:32.554962 through 11:00:32.629818 (log time 00:32:53.546 to 00:32:53.551) ...]
00:32:53.551 [2024-11-19 11:00:32.630179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.551 [2024-11-19 11:00:32.630213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.551 qpair failed and we were unable to recover it. 00:32:53.551 [2024-11-19 11:00:32.630568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.551 [2024-11-19 11:00:32.630597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.551 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.630975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.631006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.631419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.631452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.631828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.631860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.632251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.632285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.632644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.632681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.633028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.633058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.633407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.633442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.633813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.633843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 
00:32:53.552 [2024-11-19 11:00:32.634203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.634236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.634488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.634521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.634892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.634923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.635180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.635212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.635570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.635608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.635956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.635987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.636325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.636358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.636581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.636610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.636944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.636974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.637336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.637369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 
00:32:53.552 [2024-11-19 11:00:32.637720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.637751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.638110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.638142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.638537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.638569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.638924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.638955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.639340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.639371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.639741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.639772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.640169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.640202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.640560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.640592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.640710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.640745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.641005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.641036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 
00:32:53.552 [2024-11-19 11:00:32.641409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.641442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.641773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.641803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.642156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.642195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.642562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.642598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.642965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.642996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.643335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.643366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.643590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.643620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.643991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.644021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.644388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.644421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.552 qpair failed and we were unable to recover it. 00:32:53.552 [2024-11-19 11:00:32.644636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.552 [2024-11-19 11:00:32.644666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 
00:32:53.553 [2024-11-19 11:00:32.644909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.644940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.645292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.645323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.645664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.645697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.645900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.645932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.646284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.646316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.646532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.646562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.646810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.646840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.647068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.647098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.647456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.647489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.647845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.647876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 
00:32:53.553 [2024-11-19 11:00:32.648094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.648128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.648521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.648555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.648913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.648942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.649274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.649306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.649527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.649560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.649803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.649833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.650188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.650221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.650574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.650607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.650965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.650998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.651339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.651370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 
00:32:53.553 [2024-11-19 11:00:32.651745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.651776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.652131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.652171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.652504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.652536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.652894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.652927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.653286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.653318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.653675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.653707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.653954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.653988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.654335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.654368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.654602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.654633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.654995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.655027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 
00:32:53.553 [2024-11-19 11:00:32.655406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.655439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.655653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.655683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.656079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.656108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.656336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.656367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.656601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.656632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.656974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.657004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.657343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.657378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.553 [2024-11-19 11:00:32.657734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.553 [2024-11-19 11:00:32.657764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.553 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.658107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.658138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.658531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.658563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 
00:32:53.554 [2024-11-19 11:00:32.658788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.658818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.659194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.659226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.659434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.659462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.659826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.659857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.660062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.660091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.660451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.660484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.660854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.660885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.661246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.661279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.661656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.661688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.662050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.662081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 
00:32:53.554 [2024-11-19 11:00:32.662491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.662524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.662886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.662918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.663283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.663315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.663685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.663715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.664078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.664109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.664463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.664496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.664839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.664868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.665325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.665362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.665739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.665771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.665999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.666028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 
00:32:53.554 [2024-11-19 11:00:32.666321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.666354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.666739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.666777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.666997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.667026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.667369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.667400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.667746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.667778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.668131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.668169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.668486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.668518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.668889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.668920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.669268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.669301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.669669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.669699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 
00:32:53.554 [2024-11-19 11:00:32.670042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.670074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.554 qpair failed and we were unable to recover it. 00:32:53.554 [2024-11-19 11:00:32.670460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.554 [2024-11-19 11:00:32.670492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.670836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.670869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.671224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.671256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.671631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.671664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.672019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.672050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.672423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.672455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.672813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.672844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.673203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.673236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.673467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.673497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 
00:32:53.555 [2024-11-19 11:00:32.673853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.673884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.674235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.674268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.674626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.674657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.675017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.675047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.675445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.675477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.675824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.675856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.676221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.676255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.676618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.676649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.677007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.677044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 00:32:53.555 [2024-11-19 11:00:32.677257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.677289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it. 
00:32:53.555 [2024-11-19 11:00:32.677666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.555 [2024-11-19 11:00:32.677696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.555 qpair failed and we were unable to recover it.
[... the preceding three-line error sequence repeats roughly 200 more times between 11:00:32.677 and 11:00:32.751 (elapsed 00:32:53.555-00:32:53.843), every attempt failing with errno = 111 (ECONNREFUSED) on the same tqpair=0x15460c0 against 10.0.0.2:4420, each ending with "qpair failed and we were unable to recover it." ...]
00:32:53.843 [2024-11-19 11:00:32.751588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.751618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.751967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.751998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.752386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.752418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.752765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.752795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.753013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.753043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.753427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.753465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.753739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.753769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.754022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.754054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.754425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.754458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.754792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.754822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 
00:32:53.843 [2024-11-19 11:00:32.755178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.755210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.755560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.755591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.755937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.755969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.756195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.756227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.756561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.756591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.756976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.757009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.757400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.757433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.757783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.757815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.758182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.758214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.758571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.758603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 
00:32:53.843 [2024-11-19 11:00:32.758840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.758872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.759251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.759282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.759526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.759556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.759928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.759960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.760216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.843 [2024-11-19 11:00:32.760250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.843 qpair failed and we were unable to recover it. 00:32:53.843 [2024-11-19 11:00:32.760590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.760621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.760837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.760869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.761222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.761254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.761653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.761686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.761789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.761818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 
00:32:53.844 [2024-11-19 11:00:32.762180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.762212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.762609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.762641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.762875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.762907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.763342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.763374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.763729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.763761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.764124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.764155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.764538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.764570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.764930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.764960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.765327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.765358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.765573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.765603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 
00:32:53.844 [2024-11-19 11:00:32.765820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.765850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.766180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.766212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.766425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.766454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.766798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.766828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.767040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.767069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.767445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.767478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.767842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.767872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.768076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.768106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.768342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.768373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.768741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.768770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 
00:32:53.844 [2024-11-19 11:00:32.769124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.769155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.769511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.769542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.769772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.769805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.770030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.770062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.844 qpair failed and we were unable to recover it. 00:32:53.844 [2024-11-19 11:00:32.770421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.844 [2024-11-19 11:00:32.770453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.770805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.770837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.771183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.771215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.771581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.771612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.771707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.771737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.772100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.772131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 
00:32:53.845 [2024-11-19 11:00:32.772505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.772536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.772771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.772801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.773197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.773231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.773465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.773495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.773736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.773765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.773986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.774017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.774362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.774393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.774766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.774798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.775026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.775055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.775385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.775417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 
00:32:53.845 [2024-11-19 11:00:32.775774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.775805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.776186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.776219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.776479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.776508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.776749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.776786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.777169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.777204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.777558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.845 [2024-11-19 11:00:32.777590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.845 qpair failed and we were unable to recover it. 00:32:53.845 [2024-11-19 11:00:32.777794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.777823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.778196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.778229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.778591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.778623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.778988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.779019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 
00:32:53.846 [2024-11-19 11:00:32.779413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.779446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.779789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.779818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.780206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.780238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.780624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.780655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.780892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.780923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.781153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.781206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.781463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.781493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.781882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.781913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.782133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.782191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.782547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.782577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 
00:32:53.846 [2024-11-19 11:00:32.782952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.782983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.783338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.783370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.783749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.783781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.784005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.784036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.784290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.784322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.784704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.784734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.784975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.785007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.785390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.785426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.785653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.785682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.786044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.786075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 
00:32:53.846 [2024-11-19 11:00:32.786218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.786256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.786351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.786379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.786594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.786624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.787027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.787059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.846 [2024-11-19 11:00:32.787402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.846 [2024-11-19 11:00:32.787434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.846 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.787792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.787823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.788034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.788065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.788280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.788312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.788680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.788711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.789072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.789104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 
00:32:53.847 [2024-11-19 11:00:32.789468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.789500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.789868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.789900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.790107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.790137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.790512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.790545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.790910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.790942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.791294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.791326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.791544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.791573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.791928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.791958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.792225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.792255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.792497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.792528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 
00:32:53.847 [2024-11-19 11:00:32.792882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.792913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.793286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.793318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.793540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.793570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.793917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.793948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.794295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.794328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.794694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.794724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.795090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.795121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.795350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.795386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.847 [2024-11-19 11:00:32.795632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.847 [2024-11-19 11:00:32.795662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.847 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.796022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.796051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 
00:32:53.848 [2024-11-19 11:00:32.796404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.796435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.796787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.796818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.797198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.797230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.797609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.797639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.798006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.798036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.798414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.798448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.798659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.798689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.799046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.799079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.799463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.799497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.799873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.799903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 
00:32:53.848 [2024-11-19 11:00:32.800236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.800267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.800637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.800670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.801028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.801058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.801273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.801305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.801540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.801577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.801823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.801854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.802071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.802100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.802484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.802516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.802871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.802903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 00:32:53.848 [2024-11-19 11:00:32.803111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.848 [2024-11-19 11:00:32.803141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.848 qpair failed and we were unable to recover it. 
00:32:53.848 [2024-11-19 11:00:32.803512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.848 [2024-11-19 11:00:32.803545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.848 qpair failed and we were unable to recover it.
00:32:53.848 [2024-11-19 11:00:32.803906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.848 [2024-11-19 11:00:32.803937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.848 qpair failed and we were unable to recover it.
00:32:53.848 [2024-11-19 11:00:32.804153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.848 [2024-11-19 11:00:32.804194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.848 qpair failed and we were unable to recover it.
00:32:53.848 [2024-11-19 11:00:32.804413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.848 [2024-11-19 11:00:32.804445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.848 qpair failed and we were unable to recover it.
00:32:53.848 [2024-11-19 11:00:32.804787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.848 [2024-11-19 11:00:32.804817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.848 qpair failed and we were unable to recover it.
00:32:53.848 [2024-11-19 11:00:32.805197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.848 [2024-11-19 11:00:32.805231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.848 qpair failed and we were unable to recover it.
00:32:53.848 [2024-11-19 11:00:32.805590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.848 [2024-11-19 11:00:32.805620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.848 qpair failed and we were unable to recover it.
00:32:53.848 [2024-11-19 11:00:32.805972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.848 [2024-11-19 11:00:32.806003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.848 qpair failed and we were unable to recover it.
00:32:53.848 [2024-11-19 11:00:32.806388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.848 [2024-11-19 11:00:32.806422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.848 qpair failed and we were unable to recover it.
00:32:53.848 [2024-11-19 11:00:32.806781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.848 [2024-11-19 11:00:32.806812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.848 qpair failed and we were unable to recover it.
00:32:53.848 [2024-11-19 11:00:32.807177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.807209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.807422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.807454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.807832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.807862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.808083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.808113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.808500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.808534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.808901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.808931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.809299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.809332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.809687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.809721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.810048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.810081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.810293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.810326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.810703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.810735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.811088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.811119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.811494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.811525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.811880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.811910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.812133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.812169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.812455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.812486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.812839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.812870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.813317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.813351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.813689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.813720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.814066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.814097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.814470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.814502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.814863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.814895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.815286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.815318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.815684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.815716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.816086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.816116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.816506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.816537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.816899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.816931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.817025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.817054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.817409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.817441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.817786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.817817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.818182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.818215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.849 [2024-11-19 11:00:32.818562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.849 [2024-11-19 11:00:32.818594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.849 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.818946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.818976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.819337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.819369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.819733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.819764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.820128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.820174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.820379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.820410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.820773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.820806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.820906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.820938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.821301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.821332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.821690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.821721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.821943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.821974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.822329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.822362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.822577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.822606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.822979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.823009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.823351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.823384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.823739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.823770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.824124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.824155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.824366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.824397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.824760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.824790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.825156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.825196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.825454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.825484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.825833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.825863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.826215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.826248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.826481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.826511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.826871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.826901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.827271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.827303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.827539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.827568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.827939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.827970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.828321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.828353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.850 [2024-11-19 11:00:32.828704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.850 [2024-11-19 11:00:32.828735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.850 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.829099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.829130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.829477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.829515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.829860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.829890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.830250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.830283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.830643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.830674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.830900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.830929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.831306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.831338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.831700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.831731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.832084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.832114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.832486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.832518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.832707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.832738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.833155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.833198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.833528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.833558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.833807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.833841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.834055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.834084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.834441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.834473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.834718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.834749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.835094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.835126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.835453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.835485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.835838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.835869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.836235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.836266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.836644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.836675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.837037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.837067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.837441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.837476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.837832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.851 [2024-11-19 11:00:32.837863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.851 qpair failed and we were unable to recover it.
00:32:53.851 [2024-11-19 11:00:32.838225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.838258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.838621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.838651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.839025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.839055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.839412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.839442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.839802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.839834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.840210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.840244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.840493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.840523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.840759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.840789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.841180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.841212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.841588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.841620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.841969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.842001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.842389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.842422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.842768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.842800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.843046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.843076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.843474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.843505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.843854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.843886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.844253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.844284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.844648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.844680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.845039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.845068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.845426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.845457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.845819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.845850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.846220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.846253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.846638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.846669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.846888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.846917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.847120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.847151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.847561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.847591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.847932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.847963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.848318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.848350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.848703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.848734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.849086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.849118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.849479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.849510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.849746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.852 [2024-11-19 11:00:32.849775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.852 qpair failed and we were unable to recover it.
00:32:53.852 [2024-11-19 11:00:32.850033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.850063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.850424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.850457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.850833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.850864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.851216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.851248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.851592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.851623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.851916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.851945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.852153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.852192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.852528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.852557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.852920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.852953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.853330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.853361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.853728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.853757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.854110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.854141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.854513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.854552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.854811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.854844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.855197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.855231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.855585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.855617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.855981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.856011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.856330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.856362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.856722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.856752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.857118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.857148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.857521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.857555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.857949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.857980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.858330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.858364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.858740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.858769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.859012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.859042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.859415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.859445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.859799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.859832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.860187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.860220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.860491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.860525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.860739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.860769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.861010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.861039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.861383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.861414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.853 qpair failed and we were unable to recover it.
00:32:53.853 [2024-11-19 11:00:32.861779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.853 [2024-11-19 11:00:32.861810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.854 qpair failed and we were unable to recover it.
00:32:53.854 [2024-11-19 11:00:32.862180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.854 [2024-11-19 11:00:32.862214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.854 qpair failed and we were unable to recover it.
00:32:53.854 [2024-11-19 11:00:32.862530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.854 [2024-11-19 11:00:32.862560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.854 qpair failed and we were unable to recover it.
00:32:53.854 [2024-11-19 11:00:32.862907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.854 [2024-11-19 11:00:32.862938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.854 qpair failed and we were unable to recover it.
00:32:53.854 [2024-11-19 11:00:32.863167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.854 [2024-11-19 11:00:32.863200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.854 qpair failed and we were unable to recover it.
00:32:53.854 [2024-11-19 11:00:32.863540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.854 [2024-11-19 11:00:32.863569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.854 qpair failed and we were unable to recover it.
00:32:53.854 [2024-11-19 11:00:32.863812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.854 [2024-11-19 11:00:32.863844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.854 qpair failed and we were unable to recover it.
00:32:53.854 [2024-11-19 11:00:32.864188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.854 [2024-11-19 11:00:32.864234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.854 qpair failed and we were unable to recover it.
00:32:53.854 [2024-11-19 11:00:32.864584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.854 [2024-11-19 11:00:32.864616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.854 qpair failed and we were unable to recover it.
00:32:53.854 [2024-11-19 11:00:32.865019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.854 [2024-11-19 11:00:32.865049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:53.854 qpair failed and we were unable to recover it.
00:32:53.854 [2024-11-19 11:00:32.865419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.865452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.865795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.865825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.866180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.866212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.866441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.866472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.866842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.866874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.867221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.867252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.867599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.867628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.868007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.868036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.868259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.868290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.868655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.868686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 
00:32:53.854 [2024-11-19 11:00:32.869055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.869086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.869455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.869487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.869704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.869734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.870056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.870088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.870456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.870488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.870836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.870868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.871233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.871264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.871588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.871617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.871980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.872010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 00:32:53.854 [2024-11-19 11:00:32.872390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.854 [2024-11-19 11:00:32.872424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.854 qpair failed and we were unable to recover it. 
00:32:53.855 [2024-11-19 11:00:32.872766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.872796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.873015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.873044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.873366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.873398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.873776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.873807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.874173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.874214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.874560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.874590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.874800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.874829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.875188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.875220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.875427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.875458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.875819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.875851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 
00:32:53.855 [2024-11-19 11:00:32.876219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.876251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.876617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.876649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.876979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.877009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.877354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.877387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.877752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.877784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.878134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.878178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.878503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.878534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.878894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.878924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.879282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.879314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.879673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.879704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 
00:32:53.855 [2024-11-19 11:00:32.880072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.880103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.880468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.880500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.880869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.880898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.881250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.881283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.881657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.881688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.882039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.882071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.882437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.882470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.882838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.882869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.883220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.883251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 00:32:53.855 [2024-11-19 11:00:32.883636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.855 [2024-11-19 11:00:32.883667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.855 qpair failed and we were unable to recover it. 
00:32:53.856 [2024-11-19 11:00:32.884077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.884108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.884489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.884521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.884732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.884762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.885121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.885153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.885554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.885585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.885938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.885969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.886326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.886359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.886723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.886754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.887109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.887141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.887514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.887545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 
00:32:53.856 [2024-11-19 11:00:32.887901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.887932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.888297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.888328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.888687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.888717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.889064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.889097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.889470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.889501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.889737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.889772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.890126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.890156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.890587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.890617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.890834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.890863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.891108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.891139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 
00:32:53.856 [2024-11-19 11:00:32.891512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.891543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.891911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.891942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.892303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.892335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.892692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.892723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.893080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.893110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.893485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.893518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.893869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.893900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.894265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.894298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.894663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.894693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.895042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.895074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 
00:32:53.856 [2024-11-19 11:00:32.895445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.895476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.895829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.895862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.856 [2024-11-19 11:00:32.896226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.856 [2024-11-19 11:00:32.896261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.856 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.896484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.896514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.896858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.896889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.897246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.897308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.897668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.897700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.898076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.898108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.898357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.898388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.898752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.898784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 
00:32:53.857 [2024-11-19 11:00:32.899123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.899154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.899395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.899425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.899799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.899837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.900188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.900222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.900624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.900657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.901028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.901060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.901437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.901469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.901827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.901857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.902233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.902264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.902638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.902667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 
00:32:53.857 [2024-11-19 11:00:32.902994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.903027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.903347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.903380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.903726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.903757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.904015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.904049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.904299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.904330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.904558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.904587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.904992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.905024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.905349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.857 [2024-11-19 11:00:32.905382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.857 qpair failed and we were unable to recover it. 00:32:53.857 [2024-11-19 11:00:32.905724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.905756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.905971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.906002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-19 11:00:32.906410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.906443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.906793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.906825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.907182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.907217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.907546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.907577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.907922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.907953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.908309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.908341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.908698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.908729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.909079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.909111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.909477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.909509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.909731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.909768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-19 11:00:32.910134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.910170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.910384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.910414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.910796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.910827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.911198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.911230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.911568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.911597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.911944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.911976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.912346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.912378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.912735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.912767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.913135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.913179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.913408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.913438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-19 11:00:32.913761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.913794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.914051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.914086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.914328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.914360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.914746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.914779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.915149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.915191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.915528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.915558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.915922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.915953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.916300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.916332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.858 [2024-11-19 11:00:32.916702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.858 [2024-11-19 11:00:32.916733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.858 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.917101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.917132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-19 11:00:32.917504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.917535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.917750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.917780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.918124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.918154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.918531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.918563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.918791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.918821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.919224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.919255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.919614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.919647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.920005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.920036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.920420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.920453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.920796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.920829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-19 11:00:32.921201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.921233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.921612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.921644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.922013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.922043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.922420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.922452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.922669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.922701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.923080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.923113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.923476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.923507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.923862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.923894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.924273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.924306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 00:32:53.859 [2024-11-19 11:00:32.924515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.859 [2024-11-19 11:00:32.924545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.866 [2024-11-19 11:00:32.996370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.996401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:32.996616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.996646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:32.996969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.996999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:32.997332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.997363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:32.997738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.997768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:32.998131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.998172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:32.998409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.998438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:32.998679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.998708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:32.999216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.999257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:32.999496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.999532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 
00:32:53.866 [2024-11-19 11:00:32.999785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:32.999816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.000112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.000142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.000505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.000537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.000810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.000842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.001221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.001255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.001613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.001643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.002002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.002032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.002285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.002317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.002700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.002732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.002966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.002997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 
00:32:53.866 [2024-11-19 11:00:33.003278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.003312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.003660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.866 [2024-11-19 11:00:33.003692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.866 qpair failed and we were unable to recover it. 00:32:53.866 [2024-11-19 11:00:33.003919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.003950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.004343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.004375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.004744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.004776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.005129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.005170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.005555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.005586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.005798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.005828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.006096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.006129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.006542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.006573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 
00:32:53.867 [2024-11-19 11:00:33.006937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.006971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.007220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.007253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.007638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.007669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.008040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.008073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.008431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.008465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.008825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.008858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.009277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.009309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.009667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.009696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.010181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.010215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.010341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.010374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 
00:32:53.867 [2024-11-19 11:00:33.010823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.010963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f840c000b90 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.011520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.011629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f840c000b90 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.012065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.012104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f840c000b90 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.012605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.012714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f840c000b90 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.013109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.013145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.013388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.013419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.013644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.013675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:53.867 [2024-11-19 11:00:33.014039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.867 [2024-11-19 11:00:33.014070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:53.867 qpair failed and we were unable to recover it. 00:32:54.216 [2024-11-19 11:00:33.014398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.014431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.014795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.014827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 
00:32:54.217 [2024-11-19 11:00:33.015191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.015225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.015450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.015480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.015862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.015891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.016241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.016273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.016696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.016727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.017086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.017118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.017501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.017535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.017891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.017925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.018287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.018321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.018558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.018589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 
00:32:54.217 [2024-11-19 11:00:33.018963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.018994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.019251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.019282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.019647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.019677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.020030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.020062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.020316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.020347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.020710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.020740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.020966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.020996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.021257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.021288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.021652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.021683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.022059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.022090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 
00:32:54.217 [2024-11-19 11:00:33.022442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.022472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.022872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.022909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.023184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.023216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.023590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.023621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.023987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.024018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.024262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.024294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.024567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.024598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.024856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.024886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.025182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.025214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.025533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.025563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 
00:32:54.217 [2024-11-19 11:00:33.025776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.025805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.026179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.026211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.026476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.026509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.026859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.026889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.027137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.217 [2024-11-19 11:00:33.027179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.217 qpair failed and we were unable to recover it. 00:32:54.217 [2024-11-19 11:00:33.027576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.027607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.027792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.027822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.028191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.028224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.028636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.028668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.028931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.028962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 
00:32:54.218 [2024-11-19 11:00:33.029279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.029311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.029586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.029616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.030024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.030055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.030397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.030430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.030777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.030806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.031019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.031048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.031280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.031313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.031558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.031588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.031803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.031838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.032203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.032236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 
00:32:54.218 [2024-11-19 11:00:33.032579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.032610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.032826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.032857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.033218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.033250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.033621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.033651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.033887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.033917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.034279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.034312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.034698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.034728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.035096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.035128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.035522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.035555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.035959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.035989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 
00:32:54.218 [2024-11-19 11:00:33.036337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.036371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.036603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.036634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.036889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.036919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.037146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.037187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.037555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.037585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.037938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.037969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.038351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.038384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.038643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.038674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.039043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.039077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.039464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.039492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 
00:32:54.218 [2024-11-19 11:00:33.039728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.039762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.039998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.040030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.040413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.218 [2024-11-19 11:00:33.040448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.218 qpair failed and we were unable to recover it. 00:32:54.218 [2024-11-19 11:00:33.040668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.219 [2024-11-19 11:00:33.040699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.219 qpair failed and we were unable to recover it. 00:32:54.219 [2024-11-19 11:00:33.040952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.219 [2024-11-19 11:00:33.040982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.219 qpair failed and we were unable to recover it. 00:32:54.219 [2024-11-19 11:00:33.041317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.219 [2024-11-19 11:00:33.041349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.219 qpair failed and we were unable to recover it. 00:32:54.219 [2024-11-19 11:00:33.041571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.219 [2024-11-19 11:00:33.041602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.219 qpair failed and we were unable to recover it. 00:32:54.219 [2024-11-19 11:00:33.041959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.219 [2024-11-19 11:00:33.041990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.219 qpair failed and we were unable to recover it. 00:32:54.219 [2024-11-19 11:00:33.042252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.219 [2024-11-19 11:00:33.042284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.219 qpair failed and we were unable to recover it. 00:32:54.219 [2024-11-19 11:00:33.042679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.219 [2024-11-19 11:00:33.042711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.219 qpair failed and we were unable to recover it. 
00:32:54.219 [2024-11-19 11:00:33.043079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.219 [2024-11-19 11:00:33.043112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:54.219 qpair failed and we were unable to recover it.
[... the identical three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats 208 more times, with target-side timestamps running from 11:00:33.043491 through 11:00:33.117365 ...]
00:32:54.224 [2024-11-19 11:00:33.117780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.224 [2024-11-19 11:00:33.117811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420
00:32:54.224 qpair failed and we were unable to recover it.
00:32:54.224 [2024-11-19 11:00:33.118043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.224 [2024-11-19 11:00:33.118072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.224 qpair failed and we were unable to recover it. 00:32:54.224 [2024-11-19 11:00:33.118443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.224 [2024-11-19 11:00:33.118474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.224 qpair failed and we were unable to recover it. 00:32:54.224 [2024-11-19 11:00:33.118694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.224 [2024-11-19 11:00:33.118724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.224 qpair failed and we were unable to recover it. 00:32:54.224 [2024-11-19 11:00:33.119076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.224 [2024-11-19 11:00:33.119107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.224 qpair failed and we were unable to recover it. 00:32:54.224 [2024-11-19 11:00:33.119468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.224 [2024-11-19 11:00:33.119501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.224 qpair failed and we were unable to recover it. 00:32:54.224 [2024-11-19 11:00:33.119859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.119890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.120259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.120294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.120665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.120696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.121063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.121094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.121461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.121493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 
00:32:54.225 [2024-11-19 11:00:33.121735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.121766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.121993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.122027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.122412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.122443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.122799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.122830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.123217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.123249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.123665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.123697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.123952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.123983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.124336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.124370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.124727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.124756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.124852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.124881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15460c0 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 
00:32:54.225 [2024-11-19 11:00:33.125483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.125591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.126030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.126070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.126439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.126477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.126813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.126846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.127115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.127147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.127642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.127745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.128031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.128071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.128500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.128605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.128891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.128932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.129296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.129331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 
00:32:54.225 [2024-11-19 11:00:33.129683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.129715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.129962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.129994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.130260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.130293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.130507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.130537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.130896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.130930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.131183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.131216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.131579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.131611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.131973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.132004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.132376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.132410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.132780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.132812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 
00:32:54.225 [2024-11-19 11:00:33.133038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.133070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.133404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.133437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.133782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.225 [2024-11-19 11:00:33.133814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.225 qpair failed and we were unable to recover it. 00:32:54.225 [2024-11-19 11:00:33.134143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.134191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.134565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.134597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.134952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.134984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.135241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.135275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.135637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.135670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.136021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.136051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.136322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.136355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 
00:32:54.226 [2024-11-19 11:00:33.136749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.136781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.137129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.137170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.137546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.137577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.137957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.137988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.138400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.138434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.138589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.138627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.138877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.138914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.139262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.139296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.139512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.139544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.139915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.139947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 
00:32:54.226 [2024-11-19 11:00:33.140203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.140237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.140639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.140671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.141095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.141127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.141346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.141378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.141631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.141662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.142011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.142044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.142256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.142290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.142676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.142705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.142946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.142976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.143341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.143374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 
00:32:54.226 [2024-11-19 11:00:33.143735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.143766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.144138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.144184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.144306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.144334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.144696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.144725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.145081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.145111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.145495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.226 [2024-11-19 11:00:33.145528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.226 qpair failed and we were unable to recover it. 00:32:54.226 [2024-11-19 11:00:33.145861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.145891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.146284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.146315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.146532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.146563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.146925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.146955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 
00:32:54.227 [2024-11-19 11:00:33.147221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.147253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.147594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.147625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.147981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.148014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.148288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.148323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.148669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.148701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.148972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.149003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.149329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.149360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.149735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.149767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.149983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.150015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.150403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.150435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 
00:32:54.227 [2024-11-19 11:00:33.150768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.150802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.151065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.151097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.151455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.151489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.151893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.151923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.152271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.152306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.152684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.152715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.153082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.153119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.153380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.153411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.153780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.153812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.154003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.154033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 
00:32:54.227 [2024-11-19 11:00:33.154419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.154454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.154820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.154852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.155084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.155116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.155479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.155511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.155870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.155903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.156249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.156282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.156543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.156573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.156703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.156732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.157105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.157136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.157305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.157337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 
00:32:54.227 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.227 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:32:54.227 [2024-11-19 11:00:33.157729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.157762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 [2024-11-19 11:00:33.157968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.157999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 qpair failed and we were unable to recover it. 00:32:54.227 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:54.227 [2024-11-19 11:00:33.158317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.227 [2024-11-19 11:00:33.158350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.227 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.158580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:54.228 [2024-11-19 11:00:33.158613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.158823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.158856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.159119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.159150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.159497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.159527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.159892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.159924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 
00:32:54.228 [2024-11-19 11:00:33.160280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.160314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.160671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.160702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.161069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.161103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.161458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.161492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.161744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.161774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.162122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.162154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.162537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.162570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.162929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.162960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.163348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.163380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 00:32:54.228 [2024-11-19 11:00:33.163753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.228 [2024-11-19 11:00:33.163787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.228 qpair failed and we were unable to recover it. 
00:32:54.228 [2024-11-19 11:00:33.164004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.228 [2024-11-19 11:00:33.164035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.228 qpair failed and we were unable to recover it.
00:32:54.230 [... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 11:00:33.164442 through 11:00:33.198940 ...]
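Editor's note on the repeated failure above: errno 111 on Linux is ECONNREFUSED, i.e. nothing is listening (or the listener is rejecting SYNs) at 10.0.0.2:4420, the IANA-assigned NVMe/TCP port, which is exactly what this target-disconnect test provokes. A minimal sketch, not SPDK code, showing how the same errno surfaces from a plain connect(); the address and port are taken from the log:

```python
# Minimal sketch (illustrative, not SPDK's posix_sock_create): a TCP connect
# to a host that refuses the connection fails with errno 111 (ECONNREFUSED),
# the value the log keeps printing.
import errno
import socket

def try_connect(addr: str, port: int) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2.0)
    try:
        s.connect((addr, port))
        print("connected")
    except OSError as e:
        # On a refused connection this prints: connect() failed, errno = 111 (ECONNREFUSED)
        print(f"connect() failed, errno = {e.errno} ({errno.errorcode.get(e.errno, '?')})")
    finally:
        s.close()

if __name__ == "__main__":
    try_connect("10.0.0.2", 4420)  # values from the log above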
00:32:54.231 [... errno = 111 retry triplets continue (11:00:33.199169 - 11:00:33.201273) ...]
00:32:54.231 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:54.231 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:54.231 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:54.231 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:54.231 [... errno = 111 retry triplets interleave between and after the traced commands above (11:00:33.201649 - 11:00:33.205445) ...]
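Editor's note on the traced command above: `rpc_cmd bdev_malloc_create 64 512 -b Malloc0` asks the running SPDK target to create a 64 MiB RAM-backed bdev with 512-byte blocks named Malloc0, even while the host-side reconnect loop is still failing. A hedged sketch of the JSON-RPC request this corresponds to; the Unix-socket path (/var/tmp/spdk.sock is SPDK's usual default), the parameter names (num_blocks, block_size, name), and the single-recv response handling are assumptions for illustration, not the test framework's actual client:

```python
# Hedged sketch of the JSON-RPC behind `rpc_cmd bdev_malloc_create 64 512 -b
# Malloc0`. Socket path and parameter names are assumptions; rpc.py converts
# the MiB size to a block count before sending.
import json
import socket

def bdev_malloc_create(total_size_mb: int, block_size: int, name: str,
                       sock_path: str = "/var/tmp/spdk.sock") -> dict:
    num_blocks = total_size_mb * 1024 * 1024 // block_size  # 64 MiB / 512 B = 131072
    req = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_malloc_create",
        "params": {"num_blocks": num_blocks, "block_size": block_size, "name": name},
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        # Naive single read; a real client would parse a framed JSON stream.
        return json.loads(s.recv(65536).decode())

if __name__ == "__main__":
    print(bdev_malloc_create(64, 512, "Malloc0"))  # mirrors the traced call
```

Note the `|| :` in the traced trap: it keeps the EXIT handler from aborting under `set -e` if `process_shm` fails, so `nvmftestfini` still runs.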
00:32:54.234 [... the connect() failed, errno = 111 / sock connection error (tqpair=0x7f8410000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." triplet continues uninterrupted from 11:00:33.205690 through 11:00:33.236275 ...]
00:32:54.234 [2024-11-19 11:00:33.236665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.234 [2024-11-19 11:00:33.236695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.234 qpair failed and we were unable to recover it. 00:32:54.234 [2024-11-19 11:00:33.237060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.234 [2024-11-19 11:00:33.237091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.234 qpair failed and we were unable to recover it. 00:32:54.234 [2024-11-19 11:00:33.237460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.234 [2024-11-19 11:00:33.237493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.234 qpair failed and we were unable to recover it. 00:32:54.234 [2024-11-19 11:00:33.237857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.234 [2024-11-19 11:00:33.237888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.234 qpair failed and we were unable to recover it. 00:32:54.234 [2024-11-19 11:00:33.238247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.234 [2024-11-19 11:00:33.238278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.234 qpair failed and we were unable to recover it. 00:32:54.234 [2024-11-19 11:00:33.238576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.234 [2024-11-19 11:00:33.238606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.234 qpair failed and we were unable to recover it. 00:32:54.234 [2024-11-19 11:00:33.238975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.234 [2024-11-19 11:00:33.239007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.234 qpair failed and we were unable to recover it. 00:32:54.234 [2024-11-19 11:00:33.239130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.234 [2024-11-19 11:00:33.239171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.234 qpair failed and we were unable to recover it. 00:32:54.234 [2024-11-19 11:00:33.239544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.234 [2024-11-19 11:00:33.239575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.234 qpair failed and we were unable to recover it. 00:32:54.234 [2024-11-19 11:00:33.239936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.234 [2024-11-19 11:00:33.239967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.234 qpair failed and we were unable to recover it. 
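
The repeating triplet above is the host-side NVMe/TCP initiator retrying connect() against 10.0.0.2:4420 before the target's listener is up; on Linux, errno 111 is ECONNREFUSED. A minimal sketch (not part of the autotest scripts; purely illustrative) for decoding the errno and probing the port the same way the initiator does:

    # Decode errno 111 (assumes Linux errno numbering).
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # -> ECONNREFUSED - Connection refused

    # Probe the target address/port with a plain TCP connect(), using bash's
    # /dev/tcp pseudo-device; a refused connection takes the else branch.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "port 4420 is accepting connections"
    else
        echo "connect() failed, consistent with the errno = 111 lines above"
    fi
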
00:32:54.234 [2024-11-19 11:00:33.240336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.240368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 [2024-11-19 11:00:33.240595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.240626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 [2024-11-19 11:00:33.240982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.241012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 [2024-11-19 11:00:33.241260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.241291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 Malloc0
00:32:54.234 [2024-11-19 11:00:33.241734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.241765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 [2024-11-19 11:00:33.241991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.242020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:54.234 [2024-11-19 11:00:33.242412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.242444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:32:54.234 [2024-11-19 11:00:33.242824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.242855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 [2024-11-19 11:00:33.243003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.243035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- xtrace_disable
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:54.234 [2024-11-19 11:00:33.243412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.243444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 [2024-11-19 11:00:33.243803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.243835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.234 qpair failed and we were unable to recover it.
00:32:54.234 [2024-11-19 11:00:33.244214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.234 [2024-11-19 11:00:33.244254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.244662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.244694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.245068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.245099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.245236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.245265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.245605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.245635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.245937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.245967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.246212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.246244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.246649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.246681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.247040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.247070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.247301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.247333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.247570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.247600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.247850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.247880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.248250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.248284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.248498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.248528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.248888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.235 [2024-11-19 11:00:33.248895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:54.235 [2024-11-19 11:00:33.248921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.235 qpair failed and we were unable to recover it.
00:32:54.235 [2024-11-19 11:00:33.249289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.249324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.249698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.249730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.250074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.250105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.250563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.250595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.250943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.250975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.251207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.251239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.251447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.251477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.251698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.251728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.252084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.252116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.252510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.252543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 
00:32:54.235 [2024-11-19 11:00:33.252770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.252801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.253173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.253205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.253335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.253363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.253708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.253739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.254095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.254126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.254526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.254559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.254794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.254824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.235 [2024-11-19 11:00:33.255170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.235 [2024-11-19 11:00:33.255203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.235 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.255447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.255477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.255805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.255836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 
00:32:54.236 [2024-11-19 11:00:33.256183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.256216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.256537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.256568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.256699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.256730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.257090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.257123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.257474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.257507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.257883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.257914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:54.236 [2024-11-19 11:00:33.258152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.258192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:54.236 [2024-11-19 11:00:33.258445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.258475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- xtrace_disable
00:32:54.236 [2024-11-19 11:00:33.258847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.258877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:54.236 [2024-11-19 11:00:33.259233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.259267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.259622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.259654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.259907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.259936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.260286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.260319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.260708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.260740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.261095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.261126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.261514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.261547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.261766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.236 [2024-11-19 11:00:33.261804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.236 qpair failed and we were unable to recover it.
00:32:54.236 [2024-11-19 11:00:33.262199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.262232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.262455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.262484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.262832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.262864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.263229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.263261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.263623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.263654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.264024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.264057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.264426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.264458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.264819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.264850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.265203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.265233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.265622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.265653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 
00:32:54.236 [2024-11-19 11:00:33.266013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.266043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.266283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.266314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.266688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.266719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.267081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.267111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.267410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.267440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.267762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.267794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.236 qpair failed and we were unable to recover it. 00:32:54.236 [2024-11-19 11:00:33.268174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.236 [2024-11-19 11:00:33.268207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.268594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.268625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.268969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.269001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.269289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.269322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 
00:32:54.237 [2024-11-19 11:00:33.269447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.237 [2024-11-19 11:00:33.269476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.237 qpair failed and we were unable to recover it.
00:32:54.237 [2024-11-19 11:00:33.269801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.237 [2024-11-19 11:00:33.269830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.237 qpair failed and we were unable to recover it.
00:32:54.237 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:54.237 [2024-11-19 11:00:33.270211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.237 [2024-11-19 11:00:33.270244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.237 qpair failed and we were unable to recover it.
00:32:54.237 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:54.237 [2024-11-19 11:00:33.270508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.237 [2024-11-19 11:00:33.270539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.237 qpair failed and we were unable to recover it.
00:32:54.237 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- xtrace_disable
00:32:54.237 [2024-11-19 11:00:33.270909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.237 [2024-11-19 11:00:33.270939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.237 qpair failed and we were unable to recover it.
00:32:54.237 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:54.237 [2024-11-19 11:00:33.271286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.237 [2024-11-19 11:00:33.271318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.237 qpair failed and we were unable to recover it.
00:32:54.237 [2024-11-19 11:00:33.271706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.237 [2024-11-19 11:00:33.271737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.237 qpair failed and we were unable to recover it.
00:32:54.237 [2024-11-19 11:00:33.272100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.237 [2024-11-19 11:00:33.272133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.237 qpair failed and we were unable to recover it.
00:32:54.237 [2024-11-19 11:00:33.272477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.272509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.272865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.272897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.273256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.273288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.273530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.273560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.273811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.273844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.274077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.274105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.274502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.274535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.274976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.275009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.275117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.275146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.275590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.275621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 
00:32:54.237 [2024-11-19 11:00:33.275979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.276011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.276398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.276429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.276787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.276818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.277059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.277095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.277337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.277368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.277625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.277655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.278024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.278057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.278287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.278322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.278545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.278575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.279020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.279052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 
00:32:54.237 [2024-11-19 11:00:33.279338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.279371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.279742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.279771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.280028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.280058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.280329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.280361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.237 qpair failed and we were unable to recover it. 00:32:54.237 [2024-11-19 11:00:33.280742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.237 [2024-11-19 11:00:33.280774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.238 qpair failed and we were unable to recover it. 00:32:54.238 [2024-11-19 11:00:33.281146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.238 [2024-11-19 11:00:33.281187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.238 qpair failed and we were unable to recover it. 00:32:54.238 [2024-11-19 11:00:33.281549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.238 [2024-11-19 11:00:33.281579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.238 qpair failed and we were unable to recover it. 00:32:54.238 [2024-11-19 11:00:33.281691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.238 [2024-11-19 11:00:33.281720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.238 qpair failed and we were unable to recover it. 00:32:54.238 [2024-11-19 11:00:33.281847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.238 [2024-11-19 11:00:33.281880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420 00:32:54.238 qpair failed and we were unable to recover it. 
00:32:54.238 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:54.238 [2024-11-19 11:00:33.282101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.282133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:54.238 [2024-11-19 11:00:33.282563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.282594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- xtrace_disable
00:32:54.238 [2024-11-19 11:00:33.282965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.282996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:54.238 [2024-11-19 11:00:33.283349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.283380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.283758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.283789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.284015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.284052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.284421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.284452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.284809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.284840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
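
Interleaved with the connect() retries, the bash trace above shows the target-side bring-up that host/target_disconnect.sh performs via rpc_cmd (script lines 21 through 25): create the TCP transport, create the subsystem, attach the Malloc0 namespace, and add the listener. Pulled out of the trace, and assuming rpc_cmd resolves to SPDK's stock scripts/rpc.py client (the wrapper's definition is outside this excerpt), the equivalent by-hand sequence is:

    # Reconstruction of the traced rpc_cmd calls; the scripts/rpc.py path is an
    # assumption, the subcommands and arguments are copied verbatim from the trace.
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Until the last call takes effect, every host-side connect() is refused, which is exactly the errno = 111 stream filling this part of the log.
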
00:32:54.238 [2024-11-19 11:00:33.285206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.285238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.285628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.285658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.286000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.286031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.286382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.286414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.286629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.286660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.286991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.287022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.287285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.287316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.287679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.287710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.288078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.288108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.288472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.288504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.288860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.238 [2024-11-19 11:00:33.288893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8410000b90 with addr=10.0.0.2, port=4420
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 [2024-11-19 11:00:33.289283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:54.238 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:54.238 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:54.238 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:54.238 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:54.238 [2024-11-19 11:00:33.300240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:54.238 [2024-11-19 11:00:33.300392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:54.238 [2024-11-19 11:00:33.300437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:54.238 [2024-11-19 11:00:33.300458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:54.238 [2024-11-19 11:00:33.300477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:54.238 [2024-11-19 11:00:33.300528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:54.238 qpair failed and we were unable to recover it.
00:32:54.238 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:54.238 11:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1214117
00:32:54.238 [2024-11-19 11:00:33.309930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:54.238 [2024-11-19 11:00:33.310013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:54.238 [2024-11-19 11:00:33.310042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:54.238 [2024-11-19 11:00:33.310059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:54.238 [2024-11-19 11:00:33.310071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:54.238 [2024-11-19 11:00:33.310105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:54.238 qpair failed and we were unable to recover it.
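From the tcp.c *NOTICE* above onward the NVMe/TCP listener is up, so the plain connect() refusals stop and the failure moves one layer up: the target's ctrlr.c rejects the I/O qpair's Fabrics CONNECT because controller ID 0x1 is no longer known to it (consistent with what a target-disconnect test is designed to provoke), and the host sees the CONNECT complete with sct 1, sc 130. SCT 0x1 is Command Specific Status, and for a Fabrics Connect command SC 0x82 (decimal 130) is Connect Invalid Parameters per the NVMe-oF status values; rc -5 and CQ transport error -6 are the host's -EIO and -ENXIO ("No such device or address") views of the same rejection. A small illustrative decoder for that status pair (not SPDK's implementation, just the spec values relevant to this log):

    /* Sketch: map the "sct 1, sc 130" pair from the log onto NVMe-oF
     * Fabrics Connect status names. Only the codes seen here are covered. */
    #include <stdio.h>

    static const char *decode_fabrics_connect_status(int sct, int sc)
    {
        if (sct != 0x1) {               /* 0x1 = Command Specific Status */
            return "not a command-specific status";
        }
        switch (sc) {
        case 0x80: return "Connect Incompatible Format";
        case 0x81: return "Connect Controller Busy";
        case 0x82: return "Connect Invalid Parameters"; /* sc 130 in this log */
        default:   return "other command-specific status";
        }
    }

    int main(void)
    {
        /* Values copied from the log records above: sct 1, sc 130 */
        printf("sct 1, sc 130 -> %s\n", decode_fabrics_connect_status(0x1, 130));
        return 0;
    }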
00:32:54.238 [2024-11-19 11:00:33.319930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.238 [2024-11-19 11:00:33.319999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.238 [2024-11-19 11:00:33.320018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.238 [2024-11-19 11:00:33.320028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.238 [2024-11-19 11:00:33.320037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.238 [2024-11-19 11:00:33.320059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.238 qpair failed and we were unable to recover it. 00:32:54.238 [2024-11-19 11:00:33.330030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.238 [2024-11-19 11:00:33.330109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.239 [2024-11-19 11:00:33.330131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.239 [2024-11-19 11:00:33.330140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.239 [2024-11-19 11:00:33.330147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.239 [2024-11-19 11:00:33.330172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.239 qpair failed and we were unable to recover it. 00:32:54.239 [2024-11-19 11:00:33.340026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.239 [2024-11-19 11:00:33.340103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.239 [2024-11-19 11:00:33.340120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.239 [2024-11-19 11:00:33.340129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.239 [2024-11-19 11:00:33.340136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.239 [2024-11-19 11:00:33.340153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.239 qpair failed and we were unable to recover it. 
00:32:54.239 [2024-11-19 11:00:33.349990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.239 [2024-11-19 11:00:33.350049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.239 [2024-11-19 11:00:33.350065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.239 [2024-11-19 11:00:33.350073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.239 [2024-11-19 11:00:33.350080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.239 [2024-11-19 11:00:33.350097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.239 qpair failed and we were unable to recover it. 00:32:54.239 [2024-11-19 11:00:33.360005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.239 [2024-11-19 11:00:33.360075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.239 [2024-11-19 11:00:33.360091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.239 [2024-11-19 11:00:33.360099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.239 [2024-11-19 11:00:33.360106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.239 [2024-11-19 11:00:33.360124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.239 qpair failed and we were unable to recover it. 00:32:54.239 [2024-11-19 11:00:33.370050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.239 [2024-11-19 11:00:33.370121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.239 [2024-11-19 11:00:33.370138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.239 [2024-11-19 11:00:33.370151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.239 [2024-11-19 11:00:33.370164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.239 [2024-11-19 11:00:33.370183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.239 qpair failed and we were unable to recover it. 
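The identical seven-record blocks now repeat with the ctrlr.c timestamps stepping roughly 10 ms apart (33.300, 33.310, 33.320, 33.330, ...): each block is one iteration of the host's connect-then-poll retry, which tears the qpair down on failure and tries again. A hypothetical, self-contained sketch of that shape (the stub names are invented for illustration; the real steps live in posix.c, nvme_fabric.c and nvme_tcp.c, and SPDK's actual retry policy may differ):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Stubs standing in for the transport steps named in the log. */
    static int attempts;
    static bool tcp_connect(void)         { return true; }            /* listener is up now */
    static bool fabric_connect_poll(void) { return ++attempts > 3; }  /* fail a few times   */
    static void qpair_teardown(void)      { printf("qpair failed, retrying\n"); }

    int main(void)
    {
        for (;;) {
            if (tcp_connect() && fabric_connect_poll()) {
                printf("qpair connected after %d attempts\n", attempts);
                return 0;           /* in the log this point is never reached */
            }
            qpair_teardown();       /* one error block in the log per iteration */
            usleep(10 * 1000);      /* ~10 ms between attempts, as in the timestamps */
        }
    }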
00:32:54.239 [2024-11-19 11:00:33.380166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.239 [2024-11-19 11:00:33.380240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.239 [2024-11-19 11:00:33.380256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.239 [2024-11-19 11:00:33.380265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.239 [2024-11-19 11:00:33.380272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.239 [2024-11-19 11:00:33.380290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.239 qpair failed and we were unable to recover it. 00:32:54.535 [2024-11-19 11:00:33.390145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.535 [2024-11-19 11:00:33.390221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.535 [2024-11-19 11:00:33.390238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.535 [2024-11-19 11:00:33.390247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.535 [2024-11-19 11:00:33.390254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.535 [2024-11-19 11:00:33.390272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.535 qpair failed and we were unable to recover it. 00:32:54.535 [2024-11-19 11:00:33.400176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.535 [2024-11-19 11:00:33.400237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.535 [2024-11-19 11:00:33.400254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.535 [2024-11-19 11:00:33.400262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.535 [2024-11-19 11:00:33.400269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.535 [2024-11-19 11:00:33.400287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.535 qpair failed and we were unable to recover it. 
00:32:54.535 [2024-11-19 11:00:33.410127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.535 [2024-11-19 11:00:33.410242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.535 [2024-11-19 11:00:33.410260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.535 [2024-11-19 11:00:33.410268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.535 [2024-11-19 11:00:33.410275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.535 [2024-11-19 11:00:33.410299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.535 qpair failed and we were unable to recover it. 00:32:54.535 [2024-11-19 11:00:33.420232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.535 [2024-11-19 11:00:33.420325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.535 [2024-11-19 11:00:33.420343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.535 [2024-11-19 11:00:33.420352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.535 [2024-11-19 11:00:33.420363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.535 [2024-11-19 11:00:33.420382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.535 qpair failed and we were unable to recover it. 00:32:54.535 [2024-11-19 11:00:33.430246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.535 [2024-11-19 11:00:33.430313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.535 [2024-11-19 11:00:33.430331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.535 [2024-11-19 11:00:33.430339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.535 [2024-11-19 11:00:33.430346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.535 [2024-11-19 11:00:33.430364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 
00:32:54.536 [2024-11-19 11:00:33.440386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.440481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.440498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.440506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.440514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.440531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 00:32:54.536 [2024-11-19 11:00:33.450326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.450399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.450415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.450424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.450431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.450449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 00:32:54.536 [2024-11-19 11:00:33.460446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.460527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.460544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.460552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.460559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.460576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 
00:32:54.536 [2024-11-19 11:00:33.470349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.470409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.470425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.470433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.470441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.470458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 00:32:54.536 [2024-11-19 11:00:33.480325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.480387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.480403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.480412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.480420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.480437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 00:32:54.536 [2024-11-19 11:00:33.490401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.490489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.490506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.490514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.490524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.490541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 
00:32:54.536 [2024-11-19 11:00:33.500461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.500550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.500566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.500579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.500588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.500606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 00:32:54.536 [2024-11-19 11:00:33.510474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.510590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.510608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.510616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.510624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.510642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 00:32:54.536 [2024-11-19 11:00:33.520492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.520563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.520580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.520588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.520596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.520614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 
00:32:54.536 [2024-11-19 11:00:33.530520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.530594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.530611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.530619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.530626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.530643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 00:32:54.536 [2024-11-19 11:00:33.540562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.540639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.540655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.540663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.540670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.540693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 00:32:54.536 [2024-11-19 11:00:33.550552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.550616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.550632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.550639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.550646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.550663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.536 qpair failed and we were unable to recover it. 
00:32:54.536 [2024-11-19 11:00:33.560596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.536 [2024-11-19 11:00:33.560657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.536 [2024-11-19 11:00:33.560672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.536 [2024-11-19 11:00:33.560680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.536 [2024-11-19 11:00:33.560687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.536 [2024-11-19 11:00:33.560704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 00:32:54.537 [2024-11-19 11:00:33.570649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.570716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.570732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.570740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.570747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.570763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 00:32:54.537 [2024-11-19 11:00:33.580654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.580745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.580761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.580769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.580777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.580794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 
00:32:54.537 [2024-11-19 11:00:33.590590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.590652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.590669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.590677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.590684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.590701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 00:32:54.537 [2024-11-19 11:00:33.600727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.600797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.600815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.600823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.600830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.600848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 00:32:54.537 [2024-11-19 11:00:33.610731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.610803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.610828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.610837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.610844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.610865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 
00:32:54.537 [2024-11-19 11:00:33.620800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.620874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.620908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.620918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.620928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.620952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 00:32:54.537 [2024-11-19 11:00:33.630833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.630901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.630942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.630954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.630963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.630988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 00:32:54.537 [2024-11-19 11:00:33.640739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.640804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.640823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.640832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.640839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.640858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 
00:32:54.537 [2024-11-19 11:00:33.650875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.650954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.650971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.650980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.650987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.651005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 00:32:54.537 [2024-11-19 11:00:33.660950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.661027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.661062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.661074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.661082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.661107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 00:32:54.537 [2024-11-19 11:00:33.670939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.671009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.671029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.671037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.671053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.537 [2024-11-19 11:00:33.671073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.537 qpair failed and we were unable to recover it. 
00:32:54.537 [2024-11-19 11:00:33.680970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.537 [2024-11-19 11:00:33.681038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.537 [2024-11-19 11:00:33.681056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.537 [2024-11-19 11:00:33.681064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.537 [2024-11-19 11:00:33.681071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.538 [2024-11-19 11:00:33.681090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.538 qpair failed and we were unable to recover it. 00:32:54.538 [2024-11-19 11:00:33.691000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.538 [2024-11-19 11:00:33.691073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.538 [2024-11-19 11:00:33.691091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.538 [2024-11-19 11:00:33.691100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.538 [2024-11-19 11:00:33.691107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.538 [2024-11-19 11:00:33.691126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.538 qpair failed and we were unable to recover it. 00:32:54.538 [2024-11-19 11:00:33.701053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.538 [2024-11-19 11:00:33.701133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.538 [2024-11-19 11:00:33.701150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.538 [2024-11-19 11:00:33.701162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.538 [2024-11-19 11:00:33.701170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.538 [2024-11-19 11:00:33.701189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.538 qpair failed and we were unable to recover it. 
00:32:54.538 [2024-11-19 11:00:33.711055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.538 [2024-11-19 11:00:33.711130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.538 [2024-11-19 11:00:33.711146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.538 [2024-11-19 11:00:33.711154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.538 [2024-11-19 11:00:33.711169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.538 [2024-11-19 11:00:33.711187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.538 qpair failed and we were unable to recover it. 00:32:54.817 [2024-11-19 11:00:33.721100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.817 [2024-11-19 11:00:33.721167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.817 [2024-11-19 11:00:33.721185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.817 [2024-11-19 11:00:33.721193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.817 [2024-11-19 11:00:33.721200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.817 [2024-11-19 11:00:33.721218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.817 qpair failed and we were unable to recover it. 00:32:54.817 [2024-11-19 11:00:33.731090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.817 [2024-11-19 11:00:33.731157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.817 [2024-11-19 11:00:33.731179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.817 [2024-11-19 11:00:33.731188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.817 [2024-11-19 11:00:33.731195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.817 [2024-11-19 11:00:33.731213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.817 qpair failed and we were unable to recover it. 
00:32:54.817 [2024-11-19 11:00:33.741184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.817 [2024-11-19 11:00:33.741259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.817 [2024-11-19 11:00:33.741276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.817 [2024-11-19 11:00:33.741284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.817 [2024-11-19 11:00:33.741291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.817 [2024-11-19 11:00:33.741309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.817 qpair failed and we were unable to recover it. 00:32:54.817 [2024-11-19 11:00:33.751154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.817 [2024-11-19 11:00:33.751217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.817 [2024-11-19 11:00:33.751233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.817 [2024-11-19 11:00:33.751242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.817 [2024-11-19 11:00:33.751249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.817 [2024-11-19 11:00:33.751266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.817 qpair failed and we were unable to recover it. 00:32:54.817 [2024-11-19 11:00:33.761194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.817 [2024-11-19 11:00:33.761261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.817 [2024-11-19 11:00:33.761282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.817 [2024-11-19 11:00:33.761290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.817 [2024-11-19 11:00:33.761298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.817 [2024-11-19 11:00:33.761315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.817 qpair failed and we were unable to recover it. 
00:32:54.817 [2024-11-19 11:00:33.771100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.817 [2024-11-19 11:00:33.771182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.817 [2024-11-19 11:00:33.771198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.817 [2024-11-19 11:00:33.771206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.817 [2024-11-19 11:00:33.771213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.817 [2024-11-19 11:00:33.771231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.817 qpair failed and we were unable to recover it. 00:32:54.817 [2024-11-19 11:00:33.781288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.817 [2024-11-19 11:00:33.781362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.817 [2024-11-19 11:00:33.781379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.817 [2024-11-19 11:00:33.781387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.817 [2024-11-19 11:00:33.781394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.817 [2024-11-19 11:00:33.781412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.817 qpair failed and we were unable to recover it. 00:32:54.817 [2024-11-19 11:00:33.791182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.817 [2024-11-19 11:00:33.791251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.817 [2024-11-19 11:00:33.791269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.817 [2024-11-19 11:00:33.791276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.817 [2024-11-19 11:00:33.791284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.817 [2024-11-19 11:00:33.791301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.817 qpair failed and we were unable to recover it. 
00:32:54.817 [2024-11-19 11:00:33.801328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.817 [2024-11-19 11:00:33.801393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.817 [2024-11-19 11:00:33.801409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.817 [2024-11-19 11:00:33.801417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.817 [2024-11-19 11:00:33.801430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.817 [2024-11-19 11:00:33.801447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.817 qpair failed and we were unable to recover it. 00:32:54.817 [2024-11-19 11:00:33.811355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.817 [2024-11-19 11:00:33.811429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.817 [2024-11-19 11:00:33.811444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.817 [2024-11-19 11:00:33.811452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.817 [2024-11-19 11:00:33.811459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.817 [2024-11-19 11:00:33.811476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.817 qpair failed and we were unable to recover it. 00:32:54.817 [2024-11-19 11:00:33.821366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.821433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.821447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.821454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.821461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.821477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 
00:32:54.818 [2024-11-19 11:00:33.831278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.831337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.831351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.831358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.831365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.831380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 00:32:54.818 [2024-11-19 11:00:33.841392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.841448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.841461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.841469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.841475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.841490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 00:32:54.818 [2024-11-19 11:00:33.851443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.851517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.851531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.851539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.851546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.851561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 
00:32:54.818 [2024-11-19 11:00:33.861505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.861593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.861607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.861615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.861622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.861637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 00:32:54.818 [2024-11-19 11:00:33.871505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.871566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.871581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.871588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.871595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.871611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 00:32:54.818 [2024-11-19 11:00:33.881548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.881619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.881635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.881643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.881650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.881666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 
00:32:54.818 [2024-11-19 11:00:33.891563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.891633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.891653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.891661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.891668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.891684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 00:32:54.818 [2024-11-19 11:00:33.901514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.901605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.901622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.901630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.901638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.901655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 00:32:54.818 [2024-11-19 11:00:33.911642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.911712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.911728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.911737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.911744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.911765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 
00:32:54.818 [2024-11-19 11:00:33.921669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.921730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.921746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.921754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.921761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.921778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 00:32:54.818 [2024-11-19 11:00:33.931704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.931771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.931787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.931800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.931807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.931825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 00:32:54.818 [2024-11-19 11:00:33.941745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.818 [2024-11-19 11:00:33.941854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.818 [2024-11-19 11:00:33.941870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.818 [2024-11-19 11:00:33.941879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.818 [2024-11-19 11:00:33.941886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.818 [2024-11-19 11:00:33.941905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.818 qpair failed and we were unable to recover it. 
00:32:54.819 [2024-11-19 11:00:33.951752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.819 [2024-11-19 11:00:33.951831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.819 [2024-11-19 11:00:33.951848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.819 [2024-11-19 11:00:33.951856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.819 [2024-11-19 11:00:33.951864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.819 [2024-11-19 11:00:33.951880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.819 qpair failed and we were unable to recover it. 00:32:54.819 [2024-11-19 11:00:33.961791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.819 [2024-11-19 11:00:33.961857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.819 [2024-11-19 11:00:33.961874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.819 [2024-11-19 11:00:33.961882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.819 [2024-11-19 11:00:33.961889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.819 [2024-11-19 11:00:33.961906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.819 qpair failed and we were unable to recover it. 00:32:54.819 [2024-11-19 11:00:33.971717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.819 [2024-11-19 11:00:33.971790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.819 [2024-11-19 11:00:33.971806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.819 [2024-11-19 11:00:33.971814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.819 [2024-11-19 11:00:33.971821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.819 [2024-11-19 11:00:33.971844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.819 qpair failed and we were unable to recover it. 
00:32:54.819 [2024-11-19 11:00:33.981891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.819 [2024-11-19 11:00:33.981968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.819 [2024-11-19 11:00:33.981984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.819 [2024-11-19 11:00:33.981992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.819 [2024-11-19 11:00:33.982000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.819 [2024-11-19 11:00:33.982016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.819 qpair failed and we were unable to recover it. 00:32:54.819 [2024-11-19 11:00:33.991878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.819 [2024-11-19 11:00:33.991940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.819 [2024-11-19 11:00:33.991956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.819 [2024-11-19 11:00:33.991964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.819 [2024-11-19 11:00:33.991971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.819 [2024-11-19 11:00:33.991989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.819 qpair failed and we were unable to recover it. 00:32:54.819 [2024-11-19 11:00:34.001810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.819 [2024-11-19 11:00:34.001882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.819 [2024-11-19 11:00:34.001898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.819 [2024-11-19 11:00:34.001907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.819 [2024-11-19 11:00:34.001914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:54.819 [2024-11-19 11:00:34.001931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.819 qpair failed and we were unable to recover it. 
00:32:55.082 [2024-11-19 11:00:34.012010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.082 [2024-11-19 11:00:34.012118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.082 [2024-11-19 11:00:34.012134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.082 [2024-11-19 11:00:34.012142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.082 [2024-11-19 11:00:34.012148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.082 [2024-11-19 11:00:34.012173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.082 qpair failed and we were unable to recover it. 00:32:55.082 [2024-11-19 11:00:34.022028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.082 [2024-11-19 11:00:34.022147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.082 [2024-11-19 11:00:34.022167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.082 [2024-11-19 11:00:34.022175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.022182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.022201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 00:32:55.083 [2024-11-19 11:00:34.032023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.032086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.032101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.032110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.032117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.032134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 
00:32:55.083 [2024-11-19 11:00:34.042054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.042122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.042139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.042147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.042154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.042178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 00:32:55.083 [2024-11-19 11:00:34.052020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.052086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.052105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.052114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.052121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.052139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 00:32:55.083 [2024-11-19 11:00:34.062120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.062199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.062215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.062228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.062235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.062253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 
00:32:55.083 [2024-11-19 11:00:34.072119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.072191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.072208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.072216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.072224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.072241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 00:32:55.083 [2024-11-19 11:00:34.082193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.082287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.082303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.082311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.082319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.082337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 00:32:55.083 [2024-11-19 11:00:34.092178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.092291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.092308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.092316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.092323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.092342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 
00:32:55.083 [2024-11-19 11:00:34.102272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.102349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.102365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.102374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.102380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.102403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 00:32:55.083 [2024-11-19 11:00:34.112219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.112315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.112331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.112340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.112347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.112364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 00:32:55.083 [2024-11-19 11:00:34.122280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.122339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.122355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.122363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.122370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.122388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 
00:32:55.083 [2024-11-19 11:00:34.132332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.132431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.132447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.132455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.132462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.132480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 00:32:55.083 [2024-11-19 11:00:34.142383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.142486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.083 [2024-11-19 11:00:34.142502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.083 [2024-11-19 11:00:34.142510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.083 [2024-11-19 11:00:34.142518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.083 [2024-11-19 11:00:34.142535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.083 qpair failed and we were unable to recover it. 00:32:55.083 [2024-11-19 11:00:34.152362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.083 [2024-11-19 11:00:34.152422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.152438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.152446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.152454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.152471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 
00:32:55.084 [2024-11-19 11:00:34.162433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.162496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.162512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.162520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.162528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.162545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 00:32:55.084 [2024-11-19 11:00:34.172457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.172526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.172542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.172550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.172557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.172574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 00:32:55.084 [2024-11-19 11:00:34.182528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.182604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.182621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.182629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.182636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.182652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 
00:32:55.084 [2024-11-19 11:00:34.192556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.192618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.192639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.192647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.192654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.192671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 00:32:55.084 [2024-11-19 11:00:34.202530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.202590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.202606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.202614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.202621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.202638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 00:32:55.084 [2024-11-19 11:00:34.212554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.212625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.212641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.212649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.212657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.212673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 
00:32:55.084 [2024-11-19 11:00:34.222654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.222735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.222751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.222759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.222766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.222782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 00:32:55.084 [2024-11-19 11:00:34.232615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.232683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.232699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.232708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.232720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.232738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 00:32:55.084 [2024-11-19 11:00:34.242660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.242725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.242742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.242749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.242757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.242774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 
00:32:55.084 [2024-11-19 11:00:34.252740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.252821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.252837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.252845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.252852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.252869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 00:32:55.084 [2024-11-19 11:00:34.262767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.262834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.262852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.262860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.262867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.262885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 00:32:55.084 [2024-11-19 11:00:34.272801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.084 [2024-11-19 11:00:34.272913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.084 [2024-11-19 11:00:34.272940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.084 [2024-11-19 11:00:34.272949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.084 [2024-11-19 11:00:34.272957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.084 [2024-11-19 11:00:34.272977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.084 qpair failed and we were unable to recover it. 
00:32:55.347 [2024-11-19 11:00:34.282741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.347 [2024-11-19 11:00:34.282799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.347 [2024-11-19 11:00:34.282816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.347 [2024-11-19 11:00:34.282825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.347 [2024-11-19 11:00:34.282833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.347 [2024-11-19 11:00:34.282852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.347 qpair failed and we were unable to recover it. 00:32:55.347 [2024-11-19 11:00:34.292813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.347 [2024-11-19 11:00:34.292915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.292932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.292941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.292950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.292968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 00:32:55.348 [2024-11-19 11:00:34.302884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.302960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.302977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.302985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.302993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.303011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 
00:32:55.348 [2024-11-19 11:00:34.312833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.312949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.312984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.312995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.313003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.313028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 00:32:55.348 [2024-11-19 11:00:34.322872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.322937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.322964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.322974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.322981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.323000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 00:32:55.348 [2024-11-19 11:00:34.332931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.333039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.333057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.333065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.333073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.333092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 
00:32:55.348 [2024-11-19 11:00:34.342996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.343063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.343079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.343087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.343094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.343113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 00:32:55.348 [2024-11-19 11:00:34.352983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.353045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.353061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.353070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.353077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.353094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 00:32:55.348 [2024-11-19 11:00:34.362996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.363094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.363112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.363120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.363132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.363151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 
00:32:55.348 [2024-11-19 11:00:34.373067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.373148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.373171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.373179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.373186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.373208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 00:32:55.348 [2024-11-19 11:00:34.383095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.383175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.383192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.383200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.383207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.383224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 00:32:55.348 [2024-11-19 11:00:34.393091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.393177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.393196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.393204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.393212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.393230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 
00:32:55.348 [2024-11-19 11:00:34.403120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.403199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.403217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.403226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.348 [2024-11-19 11:00:34.403233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.348 [2024-11-19 11:00:34.403251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.348 qpair failed and we were unable to recover it. 00:32:55.348 [2024-11-19 11:00:34.413112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.348 [2024-11-19 11:00:34.413189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.348 [2024-11-19 11:00:34.413207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.348 [2024-11-19 11:00:34.413215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.413222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.413239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 00:32:55.349 [2024-11-19 11:00:34.423222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.423299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.423315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.423323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.423330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.423347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 
00:32:55.349 [2024-11-19 11:00:34.433182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.433245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.433262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.433270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.433277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.433295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 00:32:55.349 [2024-11-19 11:00:34.443225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.443301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.443318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.443326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.443333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.443351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 00:32:55.349 [2024-11-19 11:00:34.453175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.453256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.453278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.453287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.453293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.453311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 
00:32:55.349 [2024-11-19 11:00:34.463309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.463383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.463400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.463409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.463416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.463434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 00:32:55.349 [2024-11-19 11:00:34.473357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.473423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.473439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.473448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.473455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.473472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 00:32:55.349 [2024-11-19 11:00:34.483371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.483433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.483450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.483458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.483465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.483481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 
00:32:55.349 [2024-11-19 11:00:34.493404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.493472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.493488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.493501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.493509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.493527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 00:32:55.349 [2024-11-19 11:00:34.503450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.503517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.503533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.503542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.503549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.503567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 00:32:55.349 [2024-11-19 11:00:34.513478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.513549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.513566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.513574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.513581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.513600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 
00:32:55.349 [2024-11-19 11:00:34.523473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.523540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.523556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.523564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.523571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.349 [2024-11-19 11:00:34.523588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.349 qpair failed and we were unable to recover it. 00:32:55.349 [2024-11-19 11:00:34.533521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.349 [2024-11-19 11:00:34.533600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.349 [2024-11-19 11:00:34.533616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.349 [2024-11-19 11:00:34.533624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.349 [2024-11-19 11:00:34.533632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.350 [2024-11-19 11:00:34.533656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.350 qpair failed and we were unable to recover it. 00:32:55.613 [2024-11-19 11:00:34.543593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.543671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.543689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.613 [2024-11-19 11:00:34.543697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.613 [2024-11-19 11:00:34.543705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.613 [2024-11-19 11:00:34.543722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.613 qpair failed and we were unable to recover it. 
00:32:55.613 [2024-11-19 11:00:34.553476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.553542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.553559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.613 [2024-11-19 11:00:34.553567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.613 [2024-11-19 11:00:34.553574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.613 [2024-11-19 11:00:34.553591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.613 qpair failed and we were unable to recover it. 00:32:55.613 [2024-11-19 11:00:34.563617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.563683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.563700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.613 [2024-11-19 11:00:34.563708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.613 [2024-11-19 11:00:34.563715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.613 [2024-11-19 11:00:34.563732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.613 qpair failed and we were unable to recover it. 00:32:55.613 [2024-11-19 11:00:34.573653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.573723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.573739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.613 [2024-11-19 11:00:34.573747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.613 [2024-11-19 11:00:34.573754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.613 [2024-11-19 11:00:34.573771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.613 qpair failed and we were unable to recover it. 
00:32:55.613 [2024-11-19 11:00:34.583702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.583829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.583847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.613 [2024-11-19 11:00:34.583855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.613 [2024-11-19 11:00:34.583863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.613 [2024-11-19 11:00:34.583881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.613 qpair failed and we were unable to recover it. 00:32:55.613 [2024-11-19 11:00:34.593707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.593772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.593789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.613 [2024-11-19 11:00:34.593796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.613 [2024-11-19 11:00:34.593803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.613 [2024-11-19 11:00:34.593821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.613 qpair failed and we were unable to recover it. 00:32:55.613 [2024-11-19 11:00:34.603731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.603801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.603817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.613 [2024-11-19 11:00:34.603825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.613 [2024-11-19 11:00:34.603832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.613 [2024-11-19 11:00:34.603849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.613 qpair failed and we were unable to recover it. 
00:32:55.613 [2024-11-19 11:00:34.613752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.613866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.613884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.613 [2024-11-19 11:00:34.613892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.613 [2024-11-19 11:00:34.613899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.613 [2024-11-19 11:00:34.613916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.613 qpair failed and we were unable to recover it. 00:32:55.613 [2024-11-19 11:00:34.623692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.623770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.623786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.613 [2024-11-19 11:00:34.623814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.613 [2024-11-19 11:00:34.623822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.613 [2024-11-19 11:00:34.623839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.613 qpair failed and we were unable to recover it. 00:32:55.613 [2024-11-19 11:00:34.633801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.633861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.633877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.613 [2024-11-19 11:00:34.633885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.613 [2024-11-19 11:00:34.633892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.613 [2024-11-19 11:00:34.633909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.613 qpair failed and we were unable to recover it. 
00:32:55.613 [2024-11-19 11:00:34.643850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.613 [2024-11-19 11:00:34.643955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.613 [2024-11-19 11:00:34.643973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.643981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.643988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.644005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 00:32:55.614 [2024-11-19 11:00:34.653806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.653893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.653930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.653940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.653947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.653974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 00:32:55.614 [2024-11-19 11:00:34.663952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.664039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.664075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.664087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.664096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.664127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 
00:32:55.614 [2024-11-19 11:00:34.673956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.674022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.674041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.674050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.674057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.674077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 00:32:55.614 [2024-11-19 11:00:34.683945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.684014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.684031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.684040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.684047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.684066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 00:32:55.614 [2024-11-19 11:00:34.693994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.694066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.694083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.694092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.694099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.694117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 
00:32:55.614 [2024-11-19 11:00:34.704050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.704150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.704174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.704183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.704191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.704208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 00:32:55.614 [2024-11-19 11:00:34.714058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.714131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.714149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.714165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.714174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.714193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 00:32:55.614 [2024-11-19 11:00:34.724095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.724154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.724178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.724186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.724193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.724211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 
00:32:55.614 [2024-11-19 11:00:34.734118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.734197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.734214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.734222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.734229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.734246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 00:32:55.614 [2024-11-19 11:00:34.744187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.744259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.744275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.744284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.744291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.744308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 00:32:55.614 [2024-11-19 11:00:34.754195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.754302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.754323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.754331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.754338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.754356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 
00:32:55.614 [2024-11-19 11:00:34.764207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.764271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.764287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.614 [2024-11-19 11:00:34.764296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.614 [2024-11-19 11:00:34.764303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.614 [2024-11-19 11:00:34.764321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.614 qpair failed and we were unable to recover it. 00:32:55.614 [2024-11-19 11:00:34.774229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.614 [2024-11-19 11:00:34.774303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.614 [2024-11-19 11:00:34.774319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.615 [2024-11-19 11:00:34.774327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.615 [2024-11-19 11:00:34.774335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.615 [2024-11-19 11:00:34.774353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.615 qpair failed and we were unable to recover it. 00:32:55.615 [2024-11-19 11:00:34.784283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.615 [2024-11-19 11:00:34.784352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.615 [2024-11-19 11:00:34.784368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.615 [2024-11-19 11:00:34.784376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.615 [2024-11-19 11:00:34.784383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.615 [2024-11-19 11:00:34.784401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.615 qpair failed and we were unable to recover it. 
00:32:55.615 [2024-11-19 11:00:34.794325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.615 [2024-11-19 11:00:34.794396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.615 [2024-11-19 11:00:34.794413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.615 [2024-11-19 11:00:34.794421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.615 [2024-11-19 11:00:34.794435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.615 [2024-11-19 11:00:34.794453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.615 qpair failed and we were unable to recover it. 00:32:55.615 [2024-11-19 11:00:34.804333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.615 [2024-11-19 11:00:34.804409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.615 [2024-11-19 11:00:34.804426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.615 [2024-11-19 11:00:34.804434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.615 [2024-11-19 11:00:34.804441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.615 [2024-11-19 11:00:34.804458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.615 qpair failed and we were unable to recover it. 00:32:55.877 [2024-11-19 11:00:34.814371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.877 [2024-11-19 11:00:34.814444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.877 [2024-11-19 11:00:34.814460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.877 [2024-11-19 11:00:34.814468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.877 [2024-11-19 11:00:34.814476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.877 [2024-11-19 11:00:34.814493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.877 qpair failed and we were unable to recover it. 
00:32:55.877 [2024-11-19 11:00:34.824419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.877 [2024-11-19 11:00:34.824498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.877 [2024-11-19 11:00:34.824514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.877 [2024-11-19 11:00:34.824522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.877 [2024-11-19 11:00:34.824529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.877 [2024-11-19 11:00:34.824546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.877 qpair failed and we were unable to recover it. 00:32:55.877 [2024-11-19 11:00:34.834457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.877 [2024-11-19 11:00:34.834553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.877 [2024-11-19 11:00:34.834569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.877 [2024-11-19 11:00:34.834579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.877 [2024-11-19 11:00:34.834587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.877 [2024-11-19 11:00:34.834604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.877 qpair failed and we were unable to recover it. 00:32:55.877 [2024-11-19 11:00:34.844466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.877 [2024-11-19 11:00:34.844534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.877 [2024-11-19 11:00:34.844551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.877 [2024-11-19 11:00:34.844559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.877 [2024-11-19 11:00:34.844566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.877 [2024-11-19 11:00:34.844583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.877 qpair failed and we were unable to recover it. 
00:32:55.877 [2024-11-19 11:00:34.854510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.877 [2024-11-19 11:00:34.854582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.877 [2024-11-19 11:00:34.854598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.877 [2024-11-19 11:00:34.854606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.877 [2024-11-19 11:00:34.854613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.877 [2024-11-19 11:00:34.854631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.877 qpair failed and we were unable to recover it. 00:32:55.877 [2024-11-19 11:00:34.864568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.877 [2024-11-19 11:00:34.864643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.877 [2024-11-19 11:00:34.864659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.877 [2024-11-19 11:00:34.864667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.864674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.864691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 00:32:55.878 [2024-11-19 11:00:34.874575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.874641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.874657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.874665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.874672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.874689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 
00:32:55.878 [2024-11-19 11:00:34.884586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.884648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.884670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.884679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.884686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.884706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 00:32:55.878 [2024-11-19 11:00:34.894621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.894692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.894710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.894718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.894725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.894743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 00:32:55.878 [2024-11-19 11:00:34.904701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.904777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.904794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.904802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.904809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.904826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 
00:32:55.878 [2024-11-19 11:00:34.914692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.914760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.914776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.914784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.914792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.914809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 00:32:55.878 [2024-11-19 11:00:34.924741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.924811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.924827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.924835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.924847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.924866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 00:32:55.878 [2024-11-19 11:00:34.934798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.934870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.934886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.934894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.934901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.934918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 
00:32:55.878 [2024-11-19 11:00:34.944825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.944902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.944936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.944948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.944957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.944981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 00:32:55.878 [2024-11-19 11:00:34.954837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.954940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.954978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.954989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.954997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.955022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 00:32:55.878 [2024-11-19 11:00:34.964856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.964927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.964948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.964957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.964964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.964984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 
00:32:55.878 [2024-11-19 11:00:34.974854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.974921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.974939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.974947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.974955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.974974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 00:32:55.878 [2024-11-19 11:00:34.984947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.985022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.985039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.985048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.878 [2024-11-19 11:00:34.985055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.878 [2024-11-19 11:00:34.985074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.878 qpair failed and we were unable to recover it. 00:32:55.878 [2024-11-19 11:00:34.994962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.878 [2024-11-19 11:00:34.995062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.878 [2024-11-19 11:00:34.995078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.878 [2024-11-19 11:00:34.995087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.879 [2024-11-19 11:00:34.995094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.879 [2024-11-19 11:00:34.995113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.879 qpair failed and we were unable to recover it. 
00:32:55.879 [2024-11-19 11:00:35.004952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.879 [2024-11-19 11:00:35.005018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.879 [2024-11-19 11:00:35.005035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.879 [2024-11-19 11:00:35.005043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.879 [2024-11-19 11:00:35.005050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.879 [2024-11-19 11:00:35.005068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.879 qpair failed and we were unable to recover it. 00:32:55.879 [2024-11-19 11:00:35.014991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.879 [2024-11-19 11:00:35.015060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.879 [2024-11-19 11:00:35.015082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.879 [2024-11-19 11:00:35.015089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.879 [2024-11-19 11:00:35.015096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.879 [2024-11-19 11:00:35.015114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.879 qpair failed and we were unable to recover it. 00:32:55.879 [2024-11-19 11:00:35.025045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.879 [2024-11-19 11:00:35.025129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.879 [2024-11-19 11:00:35.025145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.879 [2024-11-19 11:00:35.025153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.879 [2024-11-19 11:00:35.025166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.879 [2024-11-19 11:00:35.025183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.879 qpair failed and we were unable to recover it. 
00:32:55.879 [2024-11-19 11:00:35.035030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.879 [2024-11-19 11:00:35.035086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.879 [2024-11-19 11:00:35.035100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.879 [2024-11-19 11:00:35.035108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.879 [2024-11-19 11:00:35.035115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.879 [2024-11-19 11:00:35.035130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.879 qpair failed and we were unable to recover it. 00:32:55.879 [2024-11-19 11:00:35.045061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.879 [2024-11-19 11:00:35.045118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.879 [2024-11-19 11:00:35.045132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.879 [2024-11-19 11:00:35.045140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.879 [2024-11-19 11:00:35.045147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.879 [2024-11-19 11:00:35.045167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.879 qpair failed and we were unable to recover it. 00:32:55.879 [2024-11-19 11:00:35.054976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.879 [2024-11-19 11:00:35.055035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.879 [2024-11-19 11:00:35.055049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.879 [2024-11-19 11:00:35.055060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.879 [2024-11-19 11:00:35.055067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.879 [2024-11-19 11:00:35.055083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.879 qpair failed and we were unable to recover it. 
00:32:55.879 [2024-11-19 11:00:35.065076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:55.879 [2024-11-19 11:00:35.065132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:55.879 [2024-11-19 11:00:35.065145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:55.879 [2024-11-19 11:00:35.065153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:55.879 [2024-11-19 11:00:35.065164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:55.879 [2024-11-19 11:00:35.065180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:55.879 qpair failed and we were unable to recover it. 00:32:56.205 [2024-11-19 11:00:35.075114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.205 [2024-11-19 11:00:35.075205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.205 [2024-11-19 11:00:35.075219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.205 [2024-11-19 11:00:35.075227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.205 [2024-11-19 11:00:35.075234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.205 [2024-11-19 11:00:35.075249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.205 qpair failed and we were unable to recover it. 00:32:56.206 [2024-11-19 11:00:35.085145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.085212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.085246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.085255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.085262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.085288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 
00:32:56.206 [2024-11-19 11:00:35.095185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.095266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.095281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.095289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.095296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.095315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 00:32:56.206 [2024-11-19 11:00:35.105187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.105243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.105257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.105265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.105271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.105286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 00:32:56.206 [2024-11-19 11:00:35.115238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.115290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.115303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.115310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.115317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.115331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 
00:32:56.206 [2024-11-19 11:00:35.125243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.125296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.125309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.125316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.125323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.125337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 00:32:56.206 [2024-11-19 11:00:35.135303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.135356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.135369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.135377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.135383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.135398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 00:32:56.206 [2024-11-19 11:00:35.145302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.145358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.145371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.145378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.145385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.145399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 
00:32:56.206 [2024-11-19 11:00:35.155360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.155416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.155429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.155437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.155444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.155459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 00:32:56.206 [2024-11-19 11:00:35.165380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.165432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.165445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.165452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.165459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.165473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 00:32:56.206 [2024-11-19 11:00:35.175468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.175526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.175539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.175546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.175553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.175568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 
00:32:56.206 [2024-11-19 11:00:35.185417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.185466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.185479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.185490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.185496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.185511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 00:32:56.206 [2024-11-19 11:00:35.195449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.206 [2024-11-19 11:00:35.195506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.206 [2024-11-19 11:00:35.195519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.206 [2024-11-19 11:00:35.195526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.206 [2024-11-19 11:00:35.195533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.206 [2024-11-19 11:00:35.195547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.206 qpair failed and we were unable to recover it. 00:32:56.206 [2024-11-19 11:00:35.205498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.205556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.207 [2024-11-19 11:00:35.205569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.207 [2024-11-19 11:00:35.205576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.207 [2024-11-19 11:00:35.205583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.207 [2024-11-19 11:00:35.205598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.207 qpair failed and we were unable to recover it. 
00:32:56.207 [2024-11-19 11:00:35.215519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.215571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.207 [2024-11-19 11:00:35.215584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.207 [2024-11-19 11:00:35.215591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.207 [2024-11-19 11:00:35.215598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.207 [2024-11-19 11:00:35.215612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.207 qpair failed and we were unable to recover it. 00:32:56.207 [2024-11-19 11:00:35.225510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.225556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.207 [2024-11-19 11:00:35.225569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.207 [2024-11-19 11:00:35.225577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.207 [2024-11-19 11:00:35.225583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.207 [2024-11-19 11:00:35.225601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.207 qpair failed and we were unable to recover it. 00:32:56.207 [2024-11-19 11:00:35.235460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.235516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.207 [2024-11-19 11:00:35.235529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.207 [2024-11-19 11:00:35.235536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.207 [2024-11-19 11:00:35.235543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.207 [2024-11-19 11:00:35.235557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.207 qpair failed and we were unable to recover it. 
00:32:56.207 [2024-11-19 11:00:35.245583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.245635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.207 [2024-11-19 11:00:35.245648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.207 [2024-11-19 11:00:35.245656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.207 [2024-11-19 11:00:35.245662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.207 [2024-11-19 11:00:35.245676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.207 qpair failed and we were unable to recover it. 00:32:56.207 [2024-11-19 11:00:35.255747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.255807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.207 [2024-11-19 11:00:35.255821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.207 [2024-11-19 11:00:35.255828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.207 [2024-11-19 11:00:35.255835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.207 [2024-11-19 11:00:35.255849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.207 qpair failed and we were unable to recover it. 00:32:56.207 [2024-11-19 11:00:35.265622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.265671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.207 [2024-11-19 11:00:35.265684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.207 [2024-11-19 11:00:35.265691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.207 [2024-11-19 11:00:35.265698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.207 [2024-11-19 11:00:35.265712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.207 qpair failed and we were unable to recover it. 
00:32:56.207 [2024-11-19 11:00:35.275679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.275730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.207 [2024-11-19 11:00:35.275743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.207 [2024-11-19 11:00:35.275750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.207 [2024-11-19 11:00:35.275757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.207 [2024-11-19 11:00:35.275771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.207 qpair failed and we were unable to recover it. 00:32:56.207 [2024-11-19 11:00:35.285691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.285741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.207 [2024-11-19 11:00:35.285754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.207 [2024-11-19 11:00:35.285761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.207 [2024-11-19 11:00:35.285768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.207 [2024-11-19 11:00:35.285782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.207 qpair failed and we were unable to recover it. 00:32:56.207 [2024-11-19 11:00:35.295714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.295769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.207 [2024-11-19 11:00:35.295782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.207 [2024-11-19 11:00:35.295789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.207 [2024-11-19 11:00:35.295796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.207 [2024-11-19 11:00:35.295810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.207 qpair failed and we were unable to recover it. 
00:32:56.207 [2024-11-19 11:00:35.305721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.207 [2024-11-19 11:00:35.305771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.208 [2024-11-19 11:00:35.305783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.208 [2024-11-19 11:00:35.305791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.208 [2024-11-19 11:00:35.305797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.208 [2024-11-19 11:00:35.305811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.208 qpair failed and we were unable to recover it. 00:32:56.208 [2024-11-19 11:00:35.315802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.208 [2024-11-19 11:00:35.315903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.208 [2024-11-19 11:00:35.315919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.208 [2024-11-19 11:00:35.315927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.208 [2024-11-19 11:00:35.315934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.208 [2024-11-19 11:00:35.315949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.208 qpair failed and we were unable to recover it. 00:32:56.208 [2024-11-19 11:00:35.325795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.208 [2024-11-19 11:00:35.325847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.208 [2024-11-19 11:00:35.325860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.208 [2024-11-19 11:00:35.325868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.208 [2024-11-19 11:00:35.325874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.208 [2024-11-19 11:00:35.325889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.208 qpair failed and we were unable to recover it. 
00:32:56.208 [2024-11-19 11:00:35.335865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.208 [2024-11-19 11:00:35.335924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.208 [2024-11-19 11:00:35.335948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.208 [2024-11-19 11:00:35.335957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.208 [2024-11-19 11:00:35.335965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.208 [2024-11-19 11:00:35.335985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.208 qpair failed and we were unable to recover it. 00:32:56.208 [2024-11-19 11:00:35.345877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.208 [2024-11-19 11:00:35.345934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.208 [2024-11-19 11:00:35.345957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.208 [2024-11-19 11:00:35.345967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.208 [2024-11-19 11:00:35.345974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.208 [2024-11-19 11:00:35.345994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.208 qpair failed and we were unable to recover it. 00:32:56.208 [2024-11-19 11:00:35.355905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.208 [2024-11-19 11:00:35.355962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.208 [2024-11-19 11:00:35.355987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.208 [2024-11-19 11:00:35.355996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.208 [2024-11-19 11:00:35.356008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.208 [2024-11-19 11:00:35.356028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.208 qpair failed and we were unable to recover it. 
00:32:56.208 [2024-11-19 11:00:35.365811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.208 [2024-11-19 11:00:35.365874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.208 [2024-11-19 11:00:35.365890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.208 [2024-11-19 11:00:35.365897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.208 [2024-11-19 11:00:35.365905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.208 [2024-11-19 11:00:35.365920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.208 qpair failed and we were unable to recover it. 00:32:56.208 [2024-11-19 11:00:35.375958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.208 [2024-11-19 11:00:35.376015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.208 [2024-11-19 11:00:35.376028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.208 [2024-11-19 11:00:35.376036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.208 [2024-11-19 11:00:35.376042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.208 [2024-11-19 11:00:35.376057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.208 qpair failed and we were unable to recover it. 00:32:56.208 [2024-11-19 11:00:35.385977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.208 [2024-11-19 11:00:35.386065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.208 [2024-11-19 11:00:35.386078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.208 [2024-11-19 11:00:35.386086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.208 [2024-11-19 11:00:35.386093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.208 [2024-11-19 11:00:35.386108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.208 qpair failed and we were unable to recover it. 
00:32:56.208 [2024-11-19 11:00:35.395999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.208 [2024-11-19 11:00:35.396058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.208 [2024-11-19 11:00:35.396071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.208 [2024-11-19 11:00:35.396079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.208 [2024-11-19 11:00:35.396086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.208 [2024-11-19 11:00:35.396100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.208 qpair failed and we were unable to recover it. 00:32:56.471 [2024-11-19 11:00:35.406040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.471 [2024-11-19 11:00:35.406091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.471 [2024-11-19 11:00:35.406104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.471 [2024-11-19 11:00:35.406112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.471 [2024-11-19 11:00:35.406119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.471 [2024-11-19 11:00:35.406134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.471 qpair failed and we were unable to recover it. 00:32:56.471 [2024-11-19 11:00:35.416066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.471 [2024-11-19 11:00:35.416122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.471 [2024-11-19 11:00:35.416135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.471 [2024-11-19 11:00:35.416142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.471 [2024-11-19 11:00:35.416149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.471 [2024-11-19 11:00:35.416167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.471 qpair failed and we were unable to recover it. 
00:32:56.471 [2024-11-19 11:00:35.425978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.471 [2024-11-19 11:00:35.426036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.471 [2024-11-19 11:00:35.426049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.471 [2024-11-19 11:00:35.426056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.471 [2024-11-19 11:00:35.426063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.471 [2024-11-19 11:00:35.426077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.471 qpair failed and we were unable to recover it. 00:32:56.471 [2024-11-19 11:00:35.436101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.471 [2024-11-19 11:00:35.436172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.471 [2024-11-19 11:00:35.436185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.471 [2024-11-19 11:00:35.436192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.471 [2024-11-19 11:00:35.436199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.471 [2024-11-19 11:00:35.436214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.471 qpair failed and we were unable to recover it. 00:32:56.471 [2024-11-19 11:00:35.446282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.471 [2024-11-19 11:00:35.446339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.471 [2024-11-19 11:00:35.446355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.471 [2024-11-19 11:00:35.446363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.471 [2024-11-19 11:00:35.446370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.471 [2024-11-19 11:00:35.446384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.471 qpair failed and we were unable to recover it. 
00:32:56.471 [2024-11-19 11:00:35.456239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.471 [2024-11-19 11:00:35.456297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.471 [2024-11-19 11:00:35.456309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.471 [2024-11-19 11:00:35.456317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.471 [2024-11-19 11:00:35.456323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.471 [2024-11-19 11:00:35.456338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.471 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.466141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.466246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.466260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.466267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.466273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.466288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.476256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.476305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.476318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.476325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.476332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.476346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.486272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.486328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.486343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.486351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.486363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.486379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.496307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.496392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.496406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.496413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.496421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.496436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.506247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.506298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.506311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.506319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.506325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.506340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.516392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.516447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.516459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.516467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.516474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.516487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.526351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.526404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.526416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.526424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.526431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.526445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.536415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.536470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.536483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.536490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.536496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.536511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.546406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.546454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.546467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.546474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.546481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.546495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.556456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.556554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.556568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.556575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.556582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.556597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.566366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.566420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.566433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.566441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.566447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.566461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.576526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.576585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.576599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.576606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.576613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.472 [2024-11-19 11:00:35.576627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.472 qpair failed and we were unable to recover it.
00:32:56.472 [2024-11-19 11:00:35.586503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.472 [2024-11-19 11:00:35.586557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.472 [2024-11-19 11:00:35.586570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.472 [2024-11-19 11:00:35.586577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.472 [2024-11-19 11:00:35.586584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.473 [2024-11-19 11:00:35.586599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.473 qpair failed and we were unable to recover it.
00:32:56.473 [2024-11-19 11:00:35.596451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.473 [2024-11-19 11:00:35.596530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.473 [2024-11-19 11:00:35.596545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.473 [2024-11-19 11:00:35.596552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.473 [2024-11-19 11:00:35.596559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.473 [2024-11-19 11:00:35.596574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.473 qpair failed and we were unable to recover it.
00:32:56.473 [2024-11-19 11:00:35.606595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.473 [2024-11-19 11:00:35.606647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.473 [2024-11-19 11:00:35.606660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.473 [2024-11-19 11:00:35.606668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.473 [2024-11-19 11:00:35.606674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.473 [2024-11-19 11:00:35.606689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.473 qpair failed and we were unable to recover it.
00:32:56.473 [2024-11-19 11:00:35.616627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.473 [2024-11-19 11:00:35.616682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.473 [2024-11-19 11:00:35.616695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.473 [2024-11-19 11:00:35.616705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.473 [2024-11-19 11:00:35.616712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.473 [2024-11-19 11:00:35.616726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.473 qpair failed and we were unable to recover it.
00:32:56.473 [2024-11-19 11:00:35.626630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.473 [2024-11-19 11:00:35.626678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.473 [2024-11-19 11:00:35.626691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.473 [2024-11-19 11:00:35.626698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.473 [2024-11-19 11:00:35.626705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.473 [2024-11-19 11:00:35.626719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.473 qpair failed and we were unable to recover it.
00:32:56.473 [2024-11-19 11:00:35.636677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.473 [2024-11-19 11:00:35.636728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.473 [2024-11-19 11:00:35.636741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.473 [2024-11-19 11:00:35.636748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.473 [2024-11-19 11:00:35.636755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.473 [2024-11-19 11:00:35.636769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.473 qpair failed and we were unable to recover it.
00:32:56.473 [2024-11-19 11:00:35.646704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.473 [2024-11-19 11:00:35.646754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.473 [2024-11-19 11:00:35.646766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.473 [2024-11-19 11:00:35.646773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.473 [2024-11-19 11:00:35.646780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.473 [2024-11-19 11:00:35.646794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.473 qpair failed and we were unable to recover it.
00:32:56.473 [2024-11-19 11:00:35.656721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.473 [2024-11-19 11:00:35.656781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.473 [2024-11-19 11:00:35.656794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.473 [2024-11-19 11:00:35.656801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.473 [2024-11-19 11:00:35.656808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.473 [2024-11-19 11:00:35.656825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.473 qpair failed and we were unable to recover it.
00:32:56.736 [2024-11-19 11:00:35.666729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.736 [2024-11-19 11:00:35.666776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.736 [2024-11-19 11:00:35.666789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.736 [2024-11-19 11:00:35.666796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.736 [2024-11-19 11:00:35.666803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.736 [2024-11-19 11:00:35.666817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.736 qpair failed and we were unable to recover it.
00:32:56.736 [2024-11-19 11:00:35.676768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.736 [2024-11-19 11:00:35.676824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.736 [2024-11-19 11:00:35.676836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.736 [2024-11-19 11:00:35.676844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.736 [2024-11-19 11:00:35.676850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.736 [2024-11-19 11:00:35.676865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.736 qpair failed and we were unable to recover it.
00:32:56.736 [2024-11-19 11:00:35.686829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.736 [2024-11-19 11:00:35.686883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.736 [2024-11-19 11:00:35.686896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.736 [2024-11-19 11:00:35.686904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.736 [2024-11-19 11:00:35.686911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.736 [2024-11-19 11:00:35.686925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.736 qpair failed and we were unable to recover it.
00:32:56.736 [2024-11-19 11:00:35.696847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.736 [2024-11-19 11:00:35.696950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.736 [2024-11-19 11:00:35.696975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.736 [2024-11-19 11:00:35.696984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.736 [2024-11-19 11:00:35.696992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.736 [2024-11-19 11:00:35.697011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.736 qpair failed and we were unable to recover it.
00:32:56.736 [2024-11-19 11:00:35.706857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.736 [2024-11-19 11:00:35.706918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.736 [2024-11-19 11:00:35.706936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.736 [2024-11-19 11:00:35.706944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.736 [2024-11-19 11:00:35.706951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.736 [2024-11-19 11:00:35.706967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.736 qpair failed and we were unable to recover it.
00:32:56.736 [2024-11-19 11:00:35.716873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.736 [2024-11-19 11:00:35.716950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.736 [2024-11-19 11:00:35.716974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.736 [2024-11-19 11:00:35.716984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.736 [2024-11-19 11:00:35.716991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.736 [2024-11-19 11:00:35.717011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.736 qpair failed and we were unable to recover it.
00:32:56.736 [2024-11-19 11:00:35.726931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.736 [2024-11-19 11:00:35.726984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.736 [2024-11-19 11:00:35.726999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.736 [2024-11-19 11:00:35.727007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.736 [2024-11-19 11:00:35.727014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.736 [2024-11-19 11:00:35.727029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.736 qpair failed and we were unable to recover it.
00:32:56.736 [2024-11-19 11:00:35.736967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.736 [2024-11-19 11:00:35.737021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.736 [2024-11-19 11:00:35.737034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.736 [2024-11-19 11:00:35.737041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.736 [2024-11-19 11:00:35.737048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.736 [2024-11-19 11:00:35.737063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.736 qpair failed and we were unable to recover it.
00:32:56.736 [2024-11-19 11:00:35.746927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.736 [2024-11-19 11:00:35.746978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.736 [2024-11-19 11:00:35.746991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.736 [2024-11-19 11:00:35.747003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.736 [2024-11-19 11:00:35.747010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.736 [2024-11-19 11:00:35.747024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.736 qpair failed and we were unable to recover it.
00:32:56.736 [2024-11-19 11:00:35.756896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.736 [2024-11-19 11:00:35.756956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.736 [2024-11-19 11:00:35.756971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.736 [2024-11-19 11:00:35.756979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.736 [2024-11-19 11:00:35.756986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.757001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.767046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.767139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.767153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.767164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.767172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.767187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.777061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.777114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.777127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.777134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.777141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.777155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.787068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.787119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.787131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.787139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.787146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.787172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.797103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.797191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.797205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.797213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.797220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.797234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.807124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.807180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.807193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.807201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.807208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.807222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.817173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.817264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.817278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.817286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.817293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.817307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.827169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.827218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.827230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.827238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.827244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.827259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.837220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.837272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.837284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.837292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.837299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.837313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.847245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.847302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.847315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.847323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.847329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.847343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.857303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.857358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.857370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.857378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.857384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.857399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.867266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.867347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.867359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.867367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.867374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.867388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.877299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.877352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.877368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.877375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.737 [2024-11-19 11:00:35.877382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.737 [2024-11-19 11:00:35.877397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.737 qpair failed and we were unable to recover it.
00:32:56.737 [2024-11-19 11:00:35.887233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.737 [2024-11-19 11:00:35.887324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.737 [2024-11-19 11:00:35.887337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.737 [2024-11-19 11:00:35.887345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.738 [2024-11-19 11:00:35.887351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.738 [2024-11-19 11:00:35.887365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.738 qpair failed and we were unable to recover it.
00:32:56.738 [2024-11-19 11:00:35.897426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.738 [2024-11-19 11:00:35.897484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.738 [2024-11-19 11:00:35.897497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.738 [2024-11-19 11:00:35.897504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.738 [2024-11-19 11:00:35.897511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:56.738 [2024-11-19 11:00:35.897525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.738 qpair failed and we were unable to recover it.
00:32:56.738 [2024-11-19 11:00:35.907390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.738 [2024-11-19 11:00:35.907442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.738 [2024-11-19 11:00:35.907455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.738 [2024-11-19 11:00:35.907462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.738 [2024-11-19 11:00:35.907468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.738 [2024-11-19 11:00:35.907483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.738 qpair failed and we were unable to recover it. 00:32:56.738 [2024-11-19 11:00:35.917430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.738 [2024-11-19 11:00:35.917488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.738 [2024-11-19 11:00:35.917501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.738 [2024-11-19 11:00:35.917508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.738 [2024-11-19 11:00:35.917518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.738 [2024-11-19 11:00:35.917533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.738 qpair failed and we were unable to recover it. 00:32:56.738 [2024-11-19 11:00:35.927473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.738 [2024-11-19 11:00:35.927523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.738 [2024-11-19 11:00:35.927536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.738 [2024-11-19 11:00:35.927543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.738 [2024-11-19 11:00:35.927549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:56.738 [2024-11-19 11:00:35.927564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:56.738 qpair failed and we were unable to recover it. 
00:32:57.000 [2024-11-19 11:00:35.937509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.000 [2024-11-19 11:00:35.937563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.000 [2024-11-19 11:00:35.937576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.000 [2024-11-19 11:00:35.937584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.000 [2024-11-19 11:00:35.937590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.000 [2024-11-19 11:00:35.937605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.000 qpair failed and we were unable to recover it. 00:32:57.001 [2024-11-19 11:00:35.947497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:35.947559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:35.947572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:35.947579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:35.947585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:35.947600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 00:32:57.001 [2024-11-19 11:00:35.957552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:35.957601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:35.957613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:35.957621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:35.957628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:35.957642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 
00:32:57.001 [2024-11-19 11:00:35.967577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:35.967627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:35.967640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:35.967647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:35.967654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:35.967668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 00:32:57.001 [2024-11-19 11:00:35.977602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:35.977657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:35.977670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:35.977677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:35.977684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:35.977698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 00:32:57.001 [2024-11-19 11:00:35.987605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:35.987671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:35.987683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:35.987691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:35.987697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:35.987712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 
00:32:57.001 [2024-11-19 11:00:35.997662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:35.997747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:35.997760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:35.997769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:35.997775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:35.997790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 00:32:57.001 [2024-11-19 11:00:36.007712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:36.007764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:36.007780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:36.007787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:36.007794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:36.007809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 00:32:57.001 [2024-11-19 11:00:36.017734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:36.017791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:36.017804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:36.017811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:36.017818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:36.017832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 
00:32:57.001 [2024-11-19 11:00:36.027721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:36.027809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:36.027821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:36.027829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:36.027836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:36.027850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 00:32:57.001 [2024-11-19 11:00:36.037783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:36.037834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:36.037847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:36.037854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:36.037861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:36.037875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 00:32:57.001 [2024-11-19 11:00:36.047824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:36.047875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:36.047887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:36.047895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:36.047905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:36.047920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 
00:32:57.001 [2024-11-19 11:00:36.057714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:36.057786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:36.057798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:36.057806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:36.057813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.001 [2024-11-19 11:00:36.057828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.001 qpair failed and we were unable to recover it. 00:32:57.001 [2024-11-19 11:00:36.067805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.001 [2024-11-19 11:00:36.067853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.001 [2024-11-19 11:00:36.067865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.001 [2024-11-19 11:00:36.067873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.001 [2024-11-19 11:00:36.067880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.002 [2024-11-19 11:00:36.067894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.002 qpair failed and we were unable to recover it. 00:32:57.002 [2024-11-19 11:00:36.077787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.002 [2024-11-19 11:00:36.077841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.002 [2024-11-19 11:00:36.077865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.002 [2024-11-19 11:00:36.077874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.002 [2024-11-19 11:00:36.077881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:57.002 [2024-11-19 11:00:36.077901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:57.002 qpair failed and we were unable to recover it. 
00:32:57.002 [2024-11-19 11:00:36.087908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.002 [2024-11-19 11:00:36.088008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.002 [2024-11-19 11:00:36.088032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.002 [2024-11-19 11:00:36.088041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.002 [2024-11-19 11:00:36.088050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.002 [2024-11-19 11:00:36.088069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.002 qpair failed and we were unable to recover it.
00:32:57.002 [2024-11-19 11:00:36.097953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.002 [2024-11-19 11:00:36.098011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.002 [2024-11-19 11:00:36.098027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.002 [2024-11-19 11:00:36.098035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.002 [2024-11-19 11:00:36.098042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.002 [2024-11-19 11:00:36.098058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.002 qpair failed and we were unable to recover it.
00:32:57.002 [2024-11-19 11:00:36.107826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.002 [2024-11-19 11:00:36.107888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.002 [2024-11-19 11:00:36.107902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.002 [2024-11-19 11:00:36.107910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.002 [2024-11-19 11:00:36.107916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.002 [2024-11-19 11:00:36.107931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.002 qpair failed and we were unable to recover it.
00:32:57.002 [2024-11-19 11:00:36.117996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.002 [2024-11-19 11:00:36.118049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.002 [2024-11-19 11:00:36.118062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.002 [2024-11-19 11:00:36.118070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.002 [2024-11-19 11:00:36.118077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.002 [2024-11-19 11:00:36.118091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.002 qpair failed and we were unable to recover it.
00:32:57.002 [2024-11-19 11:00:36.128032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.002 [2024-11-19 11:00:36.128081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.002 [2024-11-19 11:00:36.128093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.002 [2024-11-19 11:00:36.128101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.002 [2024-11-19 11:00:36.128108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.002 [2024-11-19 11:00:36.128122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.002 qpair failed and we were unable to recover it.
00:32:57.002 [2024-11-19 11:00:36.138058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.002 [2024-11-19 11:00:36.138117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.002 [2024-11-19 11:00:36.138131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.002 [2024-11-19 11:00:36.138138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.002 [2024-11-19 11:00:36.138145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.002 [2024-11-19 11:00:36.138163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.002 qpair failed and we were unable to recover it.
00:32:57.002 [2024-11-19 11:00:36.148019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.002 [2024-11-19 11:00:36.148068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.002 [2024-11-19 11:00:36.148081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.002 [2024-11-19 11:00:36.148088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.002 [2024-11-19 11:00:36.148095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.002 [2024-11-19 11:00:36.148109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.002 qpair failed and we were unable to recover it.
00:32:57.002 [2024-11-19 11:00:36.158100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.002 [2024-11-19 11:00:36.158161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.002 [2024-11-19 11:00:36.158174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.002 [2024-11-19 11:00:36.158182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.002 [2024-11-19 11:00:36.158188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.002 [2024-11-19 11:00:36.158203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.002 qpair failed and we were unable to recover it.
00:32:57.002 [2024-11-19 11:00:36.168005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.002 [2024-11-19 11:00:36.168056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.002 [2024-11-19 11:00:36.168069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.002 [2024-11-19 11:00:36.168076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.002 [2024-11-19 11:00:36.168083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.002 [2024-11-19 11:00:36.168097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.002 qpair failed and we were unable to recover it.
00:32:57.002 [2024-11-19 11:00:36.178167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.002 [2024-11-19 11:00:36.178228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.002 [2024-11-19 11:00:36.178243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.002 [2024-11-19 11:00:36.178254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.002 [2024-11-19 11:00:36.178264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.002 [2024-11-19 11:00:36.178280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.002 qpair failed and we were unable to recover it.
00:32:57.003 [2024-11-19 11:00:36.188154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.003 [2024-11-19 11:00:36.188215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.003 [2024-11-19 11:00:36.188229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.003 [2024-11-19 11:00:36.188236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.003 [2024-11-19 11:00:36.188243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.003 [2024-11-19 11:00:36.188258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.003 qpair failed and we were unable to recover it.
00:32:57.265 [2024-11-19 11:00:36.198203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.265 [2024-11-19 11:00:36.198258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.265 [2024-11-19 11:00:36.198271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.265 [2024-11-19 11:00:36.198279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.265 [2024-11-19 11:00:36.198286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.265 [2024-11-19 11:00:36.198300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.265 qpair failed and we were unable to recover it.
00:32:57.265 [2024-11-19 11:00:36.208241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.265 [2024-11-19 11:00:36.208297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.265 [2024-11-19 11:00:36.208310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.265 [2024-11-19 11:00:36.208318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.265 [2024-11-19 11:00:36.208325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.265 [2024-11-19 11:00:36.208339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.265 qpair failed and we were unable to recover it.
00:32:57.265 [2024-11-19 11:00:36.218264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.265 [2024-11-19 11:00:36.218319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.265 [2024-11-19 11:00:36.218333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.265 [2024-11-19 11:00:36.218341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.265 [2024-11-19 11:00:36.218348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.265 [2024-11-19 11:00:36.218367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.265 qpair failed and we were unable to recover it.
00:32:57.265 [2024-11-19 11:00:36.228157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.265 [2024-11-19 11:00:36.228224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.265 [2024-11-19 11:00:36.228237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.265 [2024-11-19 11:00:36.228245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.265 [2024-11-19 11:00:36.228251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.265 [2024-11-19 11:00:36.228266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.265 qpair failed and we were unable to recover it.
00:32:57.265 [2024-11-19 11:00:36.238332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.265 [2024-11-19 11:00:36.238386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.265 [2024-11-19 11:00:36.238399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.265 [2024-11-19 11:00:36.238406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.265 [2024-11-19 11:00:36.238413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.265 [2024-11-19 11:00:36.238427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.265 qpair failed and we were unable to recover it.
00:32:57.265 [2024-11-19 11:00:36.248365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.265 [2024-11-19 11:00:36.248417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.265 [2024-11-19 11:00:36.248431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.265 [2024-11-19 11:00:36.248438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.265 [2024-11-19 11:00:36.248445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.265 [2024-11-19 11:00:36.248459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.265 qpair failed and we were unable to recover it.
00:32:57.265 [2024-11-19 11:00:36.258409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.265 [2024-11-19 11:00:36.258468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.265 [2024-11-19 11:00:36.258481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.265 [2024-11-19 11:00:36.258489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.265 [2024-11-19 11:00:36.258495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.265 [2024-11-19 11:00:36.258510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.265 qpair failed and we were unable to recover it.
00:32:57.265 [2024-11-19 11:00:36.268272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.265 [2024-11-19 11:00:36.268322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.265 [2024-11-19 11:00:36.268337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.265 [2024-11-19 11:00:36.268345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.265 [2024-11-19 11:00:36.268352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.265 [2024-11-19 11:00:36.268366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.265 qpair failed and we were unable to recover it.
00:32:57.265 [2024-11-19 11:00:36.278439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.265 [2024-11-19 11:00:36.278489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.265 [2024-11-19 11:00:36.278502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.265 [2024-11-19 11:00:36.278509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.265 [2024-11-19 11:00:36.278516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.265 [2024-11-19 11:00:36.278530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.265 qpair failed and we were unable to recover it.
00:32:57.265 [2024-11-19 11:00:36.288487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.288592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.288605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.288612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.266 [2024-11-19 11:00:36.288618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.266 [2024-11-19 11:00:36.288633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.266 qpair failed and we were unable to recover it.
00:32:57.266 [2024-11-19 11:00:36.298511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.298571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.298584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.298592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.266 [2024-11-19 11:00:36.298598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.266 [2024-11-19 11:00:36.298612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.266 qpair failed and we were unable to recover it.
00:32:57.266 [2024-11-19 11:00:36.308509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.308561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.308580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.308588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.266 [2024-11-19 11:00:36.308594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.266 [2024-11-19 11:00:36.308609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.266 qpair failed and we were unable to recover it.
00:32:57.266 [2024-11-19 11:00:36.318536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.318599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.318612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.318619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.266 [2024-11-19 11:00:36.318626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.266 [2024-11-19 11:00:36.318640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.266 qpair failed and we were unable to recover it.
00:32:57.266 [2024-11-19 11:00:36.328589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.328644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.328657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.328664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.266 [2024-11-19 11:00:36.328671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.266 [2024-11-19 11:00:36.328685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.266 qpair failed and we were unable to recover it.
00:32:57.266 [2024-11-19 11:00:36.338608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.338667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.338680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.338687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.266 [2024-11-19 11:00:36.338694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.266 [2024-11-19 11:00:36.338708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.266 qpair failed and we were unable to recover it.
00:32:57.266 [2024-11-19 11:00:36.348605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.348654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.348667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.348674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.266 [2024-11-19 11:00:36.348681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.266 [2024-11-19 11:00:36.348699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.266 qpair failed and we were unable to recover it.
00:32:57.266 [2024-11-19 11:00:36.358681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.358731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.358744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.358751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.266 [2024-11-19 11:00:36.358758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.266 [2024-11-19 11:00:36.358773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.266 qpair failed and we were unable to recover it.
00:32:57.266 [2024-11-19 11:00:36.368653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.368705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.368718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.368725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.266 [2024-11-19 11:00:36.368732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.266 [2024-11-19 11:00:36.368746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.266 qpair failed and we were unable to recover it.
00:32:57.266 [2024-11-19 11:00:36.378723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.378782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.378795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.378802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.266 [2024-11-19 11:00:36.378808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.266 [2024-11-19 11:00:36.378823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.266 qpair failed and we were unable to recover it.
00:32:57.266 [2024-11-19 11:00:36.388722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.266 [2024-11-19 11:00:36.388771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.266 [2024-11-19 11:00:36.388784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.266 [2024-11-19 11:00:36.388791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.267 [2024-11-19 11:00:36.388798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.267 [2024-11-19 11:00:36.388812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.267 qpair failed and we were unable to recover it.
00:32:57.267 [2024-11-19 11:00:36.398811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.267 [2024-11-19 11:00:36.398907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.267 [2024-11-19 11:00:36.398920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.267 [2024-11-19 11:00:36.398927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.267 [2024-11-19 11:00:36.398935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.267 [2024-11-19 11:00:36.398949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.267 qpair failed and we were unable to recover it.
00:32:57.267 [2024-11-19 11:00:36.408692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.267 [2024-11-19 11:00:36.408742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.267 [2024-11-19 11:00:36.408754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.267 [2024-11-19 11:00:36.408762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.267 [2024-11-19 11:00:36.408769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.267 [2024-11-19 11:00:36.408783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.267 qpair failed and we were unable to recover it.
00:32:57.267 [2024-11-19 11:00:36.418882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.267 [2024-11-19 11:00:36.418939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.267 [2024-11-19 11:00:36.418951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.267 [2024-11-19 11:00:36.418959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.267 [2024-11-19 11:00:36.418966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.267 [2024-11-19 11:00:36.418980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.267 qpair failed and we were unable to recover it.
00:32:57.267 [2024-11-19 11:00:36.428847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.267 [2024-11-19 11:00:36.428900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.267 [2024-11-19 11:00:36.428924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.267 [2024-11-19 11:00:36.428933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.267 [2024-11-19 11:00:36.428940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.267 [2024-11-19 11:00:36.428960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.267 qpair failed and we were unable to recover it.
00:32:57.267 [2024-11-19 11:00:36.438874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.267 [2024-11-19 11:00:36.438931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.267 [2024-11-19 11:00:36.438960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.267 [2024-11-19 11:00:36.438970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.267 [2024-11-19 11:00:36.438977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.267 [2024-11-19 11:00:36.438998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.267 qpair failed and we were unable to recover it.
00:32:57.267 [2024-11-19 11:00:36.448915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.267 [2024-11-19 11:00:36.448971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.267 [2024-11-19 11:00:36.448996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.267 [2024-11-19 11:00:36.449005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.267 [2024-11-19 11:00:36.449013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.267 [2024-11-19 11:00:36.449033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.267 qpair failed and we were unable to recover it.
00:32:57.529 [2024-11-19 11:00:36.458952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.529 [2024-11-19 11:00:36.459009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.459024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.459033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.459040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.459057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.468953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.469003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.469017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.469024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.469031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.469046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.478994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.479052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.479065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.479073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.479087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.479105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.489035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.489095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.489108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.489116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.489122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.489136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.499083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.499141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.499154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.499169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.499176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.499190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.509066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.509115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.509128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.509136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.509142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.509157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.519084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.519137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.519150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.519157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.519167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.519182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.529147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.529203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.529217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.529224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.529231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.529245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.539196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.539265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.539279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.539286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.539293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.539307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.549197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.549246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.549259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.549266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.549273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.549288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.559220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.559280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.530 [2024-11-19 11:00:36.559292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.530 [2024-11-19 11:00:36.559300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.530 [2024-11-19 11:00:36.559306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.530 [2024-11-19 11:00:36.559321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.530 qpair failed and we were unable to recover it.
00:32:57.530 [2024-11-19 11:00:36.569262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.530 [2024-11-19 11:00:36.569340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.569357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.569364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.569372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.569387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.579307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.579361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.579374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.579381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.579388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.579402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.589294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.589344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.589358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.589365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.589372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.589386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.599343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.599394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.599406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.599414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.599421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.599435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.609365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.609430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.609443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.609453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.609460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.609476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.619384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.619437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.619450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.619458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.619464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.619479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.629403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.629452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.629465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.629472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.629479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.629493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.639450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.639509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.639522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.639529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.639537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.639552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.649484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.649578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.649591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.649598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.649605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.649619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.659491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.659549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.659562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.659570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.659576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.659591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.669503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.669557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.669569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.669577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.669584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.669598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.531 [2024-11-19 11:00:36.679559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.531 [2024-11-19 11:00:36.679608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.531 [2024-11-19 11:00:36.679621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.531 [2024-11-19 11:00:36.679628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.531 [2024-11-19 11:00:36.679635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.531 [2024-11-19 11:00:36.679649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.531 qpair failed and we were unable to recover it.
00:32:57.532 [2024-11-19 11:00:36.689459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.532 [2024-11-19 11:00:36.689519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.532 [2024-11-19 11:00:36.689532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.532 [2024-11-19 11:00:36.689539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.532 [2024-11-19 11:00:36.689546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.532 [2024-11-19 11:00:36.689560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.532 qpair failed and we were unable to recover it.
00:32:57.532 [2024-11-19 11:00:36.699621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.532 [2024-11-19 11:00:36.699677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.532 [2024-11-19 11:00:36.699690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.532 [2024-11-19 11:00:36.699697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.532 [2024-11-19 11:00:36.699703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.532 [2024-11-19 11:00:36.699718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.532 qpair failed and we were unable to recover it.
00:32:57.532 [2024-11-19 11:00:36.709624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.532 [2024-11-19 11:00:36.709674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.532 [2024-11-19 11:00:36.709686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.532 [2024-11-19 11:00:36.709694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.532 [2024-11-19 11:00:36.709701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.532 [2024-11-19 11:00:36.709715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.532 qpair failed and we were unable to recover it.
00:32:57.532 [2024-11-19 11:00:36.719660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.532 [2024-11-19 11:00:36.719721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.532 [2024-11-19 11:00:36.719734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.532 [2024-11-19 11:00:36.719741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.532 [2024-11-19 11:00:36.719748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.532 [2024-11-19 11:00:36.719762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.532 qpair failed and we were unable to recover it.
00:32:57.794 [2024-11-19 11:00:36.729664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.794 [2024-11-19 11:00:36.729719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.794 [2024-11-19 11:00:36.729732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.794 [2024-11-19 11:00:36.729739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.794 [2024-11-19 11:00:36.729746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.794 [2024-11-19 11:00:36.729760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.794 qpair failed and we were unable to recover it.
00:32:57.794 [2024-11-19 11:00:36.739685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.794 [2024-11-19 11:00:36.739741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.794 [2024-11-19 11:00:36.739754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.794 [2024-11-19 11:00:36.739765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.794 [2024-11-19 11:00:36.739771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.794 [2024-11-19 11:00:36.739785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.794 qpair failed and we were unable to recover it.
00:32:57.794 [2024-11-19 11:00:36.749695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.794 [2024-11-19 11:00:36.749746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.794 [2024-11-19 11:00:36.749759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.794 [2024-11-19 11:00:36.749766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.794 [2024-11-19 11:00:36.749773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.794 [2024-11-19 11:00:36.749787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.794 qpair failed and we were unable to recover it.
00:32:57.794 [2024-11-19 11:00:36.759770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.794 [2024-11-19 11:00:36.759829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.794 [2024-11-19 11:00:36.759842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.794 [2024-11-19 11:00:36.759849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.794 [2024-11-19 11:00:36.759855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.794 [2024-11-19 11:00:36.759869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.794 qpair failed and we were unable to recover it.
00:32:57.794 [2024-11-19 11:00:36.769810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.794 [2024-11-19 11:00:36.769867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.794 [2024-11-19 11:00:36.769880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.794 [2024-11-19 11:00:36.769887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.794 [2024-11-19 11:00:36.769894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.794 [2024-11-19 11:00:36.769908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.794 qpair failed and we were unable to recover it.
00:32:57.794 [2024-11-19 11:00:36.779864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.794 [2024-11-19 11:00:36.779919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.794 [2024-11-19 11:00:36.779932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.794 [2024-11-19 11:00:36.779939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.794 [2024-11-19 11:00:36.779945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.794 [2024-11-19 11:00:36.779963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.794 qpair failed and we were unable to recover it.
00:32:57.794 [2024-11-19 11:00:36.789826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.794 [2024-11-19 11:00:36.789877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.794 [2024-11-19 11:00:36.789890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.794 [2024-11-19 11:00:36.789897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.794 [2024-11-19 11:00:36.789904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.794 [2024-11-19 11:00:36.789919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.794 qpair failed and we were unable to recover it.
00:32:57.794 [2024-11-19 11:00:36.799872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.794 [2024-11-19 11:00:36.799927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.794 [2024-11-19 11:00:36.799940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.794 [2024-11-19 11:00:36.799947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.794 [2024-11-19 11:00:36.799954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.799968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.809913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.809965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.809978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.809985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.809992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.810006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.819976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.820066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.820079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.820086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.820093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.820107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.829946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.830000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.830013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.830020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.830026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.830040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.840004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.840062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.840075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.840082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.840089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.840103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.850056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.850134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.850147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.850154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.850166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.850180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.860001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.860050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.860063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.860071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.860077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.860091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.870058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.870100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.870116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.870123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.870130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.870144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.880101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.880146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.880163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.880171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.880177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.880192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.890127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.890179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.890193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.890200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.890207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.890221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.900120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.900171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.900184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.900191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.900198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.900213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.910165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.910261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.910274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.910282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.910288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.910306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.920205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.920254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.920267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.795 [2024-11-19 11:00:36.920275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.795 [2024-11-19 11:00:36.920281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.795 [2024-11-19 11:00:36.920295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.795 qpair failed and we were unable to recover it.
00:32:57.795 [2024-11-19 11:00:36.930218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.795 [2024-11-19 11:00:36.930263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.795 [2024-11-19 11:00:36.930276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.796 [2024-11-19 11:00:36.930283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.796 [2024-11-19 11:00:36.930289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.796 [2024-11-19 11:00:36.930304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.796 qpair failed and we were unable to recover it.
00:32:57.796 [2024-11-19 11:00:36.940211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.796 [2024-11-19 11:00:36.940259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.796 [2024-11-19 11:00:36.940272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.796 [2024-11-19 11:00:36.940279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.796 [2024-11-19 11:00:36.940285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.796 [2024-11-19 11:00:36.940300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.796 qpair failed and we were unable to recover it.
00:32:57.796 [2024-11-19 11:00:36.950262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.796 [2024-11-19 11:00:36.950314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.796 [2024-11-19 11:00:36.950326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.796 [2024-11-19 11:00:36.950333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.796 [2024-11-19 11:00:36.950340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.796 [2024-11-19 11:00:36.950354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.796 qpair failed and we were unable to recover it.
00:32:57.796 [2024-11-19 11:00:36.960305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.796 [2024-11-19 11:00:36.960356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.796 [2024-11-19 11:00:36.960369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.796 [2024-11-19 11:00:36.960376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.796 [2024-11-19 11:00:36.960383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.796 [2024-11-19 11:00:36.960397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.796 qpair failed and we were unable to recover it.
00:32:57.796 [2024-11-19 11:00:36.970341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.796 [2024-11-19 11:00:36.970391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.796 [2024-11-19 11:00:36.970404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.796 [2024-11-19 11:00:36.970411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.796 [2024-11-19 11:00:36.970418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.796 [2024-11-19 11:00:36.970432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.796 qpair failed and we were unable to recover it.
00:32:57.796 [2024-11-19 11:00:36.980322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:57.796 [2024-11-19 11:00:36.980368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:57.796 [2024-11-19 11:00:36.980381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:57.796 [2024-11-19 11:00:36.980388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:57.796 [2024-11-19 11:00:36.980395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:57.796 [2024-11-19 11:00:36.980409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:57.796 qpair failed and we were unable to recover it.
00:32:58.057 [2024-11-19 11:00:36.990361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.057 [2024-11-19 11:00:36.990408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.057 [2024-11-19 11:00:36.990421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.057 [2024-11-19 11:00:36.990428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.057 [2024-11-19 11:00:36.990435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.057 [2024-11-19 11:00:36.990450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.057 qpair failed and we were unable to recover it.
00:32:58.057 [2024-11-19 11:00:37.000406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.057 [2024-11-19 11:00:37.000456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.057 [2024-11-19 11:00:37.000472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.057 [2024-11-19 11:00:37.000480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.057 [2024-11-19 11:00:37.000487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.057 [2024-11-19 11:00:37.000501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.057 qpair failed and we were unable to recover it.
00:32:58.057 [2024-11-19 11:00:37.010452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.057 [2024-11-19 11:00:37.010503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.057 [2024-11-19 11:00:37.010516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.010524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.010531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.010545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.020314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.020363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.020376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.020384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.020390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.020404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.030467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.030530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.030542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.030549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.030556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.030570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.040527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.040577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.040590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.040597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.040607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.040622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.050498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.050544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.050557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.050564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.050571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.050585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.060419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.060463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.060475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.060483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.060490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.060503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.070549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.070599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.070612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.070619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.070626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.070640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.080489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.080534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.080546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.080554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.080560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.080575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.090628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.090680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.090693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.090701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.090707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.090722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.100661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.100709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.100722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.100729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.100736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.100750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.110663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.110708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.110721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.110728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.110734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.110749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.120661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.058 [2024-11-19 11:00:37.120707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.058 [2024-11-19 11:00:37.120720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.058 [2024-11-19 11:00:37.120727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.058 [2024-11-19 11:00:37.120733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.058 [2024-11-19 11:00:37.120747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.058 qpair failed and we were unable to recover it.
00:32:58.058 [2024-11-19 11:00:37.130773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.059 [2024-11-19 11:00:37.130818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.059 [2024-11-19 11:00:37.130835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.059 [2024-11-19 11:00:37.130842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.059 [2024-11-19 11:00:37.130848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.059 [2024-11-19 11:00:37.130862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.059 qpair failed and we were unable to recover it.
00:32:58.059 [2024-11-19 11:00:37.140773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.059 [2024-11-19 11:00:37.140821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.059 [2024-11-19 11:00:37.140841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.059 [2024-11-19 11:00:37.140848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.059 [2024-11-19 11:00:37.140854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.059 [2024-11-19 11:00:37.140873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.059 qpair failed and we were unable to recover it.
00:32:58.059 [2024-11-19 11:00:37.150820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.059 [2024-11-19 11:00:37.150866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.059 [2024-11-19 11:00:37.150879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.059 [2024-11-19 11:00:37.150886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.059 [2024-11-19 11:00:37.150893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.059 [2024-11-19 11:00:37.150907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.059 qpair failed and we were unable to recover it.
00:32:58.059 [2024-11-19 11:00:37.160826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.059 [2024-11-19 11:00:37.160871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.059 [2024-11-19 11:00:37.160884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.059 [2024-11-19 11:00:37.160892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.059 [2024-11-19 11:00:37.160898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.059 [2024-11-19 11:00:37.160912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.059 qpair failed and we were unable to recover it.
00:32:58.059 [2024-11-19 11:00:37.170853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.059 [2024-11-19 11:00:37.170898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.059 [2024-11-19 11:00:37.170911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.059 [2024-11-19 11:00:37.170921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.059 [2024-11-19 11:00:37.170928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.059 [2024-11-19 11:00:37.170942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.059 qpair failed and we were unable to recover it.
00:32:58.059 [2024-11-19 11:00:37.180817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.059 [2024-11-19 11:00:37.180864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.059 [2024-11-19 11:00:37.180878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.059 [2024-11-19 11:00:37.180885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.059 [2024-11-19 11:00:37.180892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.059 [2024-11-19 11:00:37.180906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.059 qpair failed and we were unable to recover it.
00:32:58.059 [2024-11-19 11:00:37.190920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.059 [2024-11-19 11:00:37.190977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.059 [2024-11-19 11:00:37.190990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.059 [2024-11-19 11:00:37.190997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.059 [2024-11-19 11:00:37.191004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.059 [2024-11-19 11:00:37.191018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.059 qpair failed and we were unable to recover it.
00:32:58.059 [2024-11-19 11:00:37.200907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.059 [2024-11-19 11:00:37.200956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.059 [2024-11-19 11:00:37.200980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.059 [2024-11-19 11:00:37.200990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.059 [2024-11-19 11:00:37.200997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.059 [2024-11-19 11:00:37.201017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.059 qpair failed and we were unable to recover it.
00:32:58.059 [2024-11-19 11:00:37.210969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.059 [2024-11-19 11:00:37.211015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.059 [2024-11-19 11:00:37.211030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.059 [2024-11-19 11:00:37.211037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.059 [2024-11-19 11:00:37.211044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.059 [2024-11-19 11:00:37.211060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.059 qpair failed and we were unable to recover it.
00:32:58.059 [2024-11-19 11:00:37.220857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.059 [2024-11-19 11:00:37.220907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.059 [2024-11-19 11:00:37.220920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.059 [2024-11-19 11:00:37.220928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.059 [2024-11-19 11:00:37.220934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.059 [2024-11-19 11:00:37.220949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.059 qpair failed and we were unable to recover it.
00:32:58.059 [2024-11-19 11:00:37.230901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.059 [2024-11-19 11:00:37.230948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.059 [2024-11-19 11:00:37.230963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.059 [2024-11-19 11:00:37.230970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.059 [2024-11-19 11:00:37.230977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.059 [2024-11-19 11:00:37.230993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.059 qpair failed and we were unable to recover it. 00:32:58.059 [2024-11-19 11:00:37.241031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.059 [2024-11-19 11:00:37.241085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.059 [2024-11-19 11:00:37.241099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.059 [2024-11-19 11:00:37.241106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.059 [2024-11-19 11:00:37.241113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.059 [2024-11-19 11:00:37.241127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.059 qpair failed and we were unable to recover it. 00:32:58.322 [2024-11-19 11:00:37.251095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.322 [2024-11-19 11:00:37.251146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.322 [2024-11-19 11:00:37.251164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.322 [2024-11-19 11:00:37.251172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.322 [2024-11-19 11:00:37.251179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.322 [2024-11-19 11:00:37.251193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.322 qpair failed and we were unable to recover it. 
00:32:58.322 [2024-11-19 11:00:37.261072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.322 [2024-11-19 11:00:37.261123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.322 [2024-11-19 11:00:37.261136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.322 [2024-11-19 11:00:37.261143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.322 [2024-11-19 11:00:37.261150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.322 [2024-11-19 11:00:37.261168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.322 qpair failed and we were unable to recover it. 00:32:58.322 [2024-11-19 11:00:37.271007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.322 [2024-11-19 11:00:37.271061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.322 [2024-11-19 11:00:37.271074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.322 [2024-11-19 11:00:37.271081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.322 [2024-11-19 11:00:37.271088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.322 [2024-11-19 11:00:37.271102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.322 qpair failed and we were unable to recover it. 00:32:58.322 [2024-11-19 11:00:37.281135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.322 [2024-11-19 11:00:37.281188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.322 [2024-11-19 11:00:37.281201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.322 [2024-11-19 11:00:37.281209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.281215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.281230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 
00:32:58.323 [2024-11-19 11:00:37.291202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.291252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.291265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.291273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.291280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.291294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 00:32:58.323 [2024-11-19 11:00:37.301189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.301253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.301265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.301280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.301287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.301302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 00:32:58.323 [2024-11-19 11:00:37.311234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.311285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.311298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.311305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.311312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.311326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 
00:32:58.323 [2024-11-19 11:00:37.321261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.321311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.321323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.321330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.321337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.321352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 00:32:58.323 [2024-11-19 11:00:37.331267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.331318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.331330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.331338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.331345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.331359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 00:32:58.323 [2024-11-19 11:00:37.341270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.341321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.341333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.341341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.341347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.341365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 
00:32:58.323 [2024-11-19 11:00:37.351230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.351276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.351288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.351296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.351302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.351317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 00:32:58.323 [2024-11-19 11:00:37.361346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.361402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.361414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.361421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.361428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.361442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 00:32:58.323 [2024-11-19 11:00:37.371418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.371506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.371519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.371527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.371534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.371549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 
00:32:58.323 [2024-11-19 11:00:37.381384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.381433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.381447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.381454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.381461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.381479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 00:32:58.323 [2024-11-19 11:00:37.391460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.391506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.391520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.391527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.391534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.391548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 00:32:58.323 [2024-11-19 11:00:37.401492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.323 [2024-11-19 11:00:37.401537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.323 [2024-11-19 11:00:37.401550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.323 [2024-11-19 11:00:37.401557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.323 [2024-11-19 11:00:37.401564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.323 [2024-11-19 11:00:37.401578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.323 qpair failed and we were unable to recover it. 
00:32:58.323 [2024-11-19 11:00:37.411516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.411565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.411578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.411586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.411592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.411606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 00:32:58.324 [2024-11-19 11:00:37.421489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.421567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.421579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.421587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.421594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.421608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 00:32:58.324 [2024-11-19 11:00:37.431550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.431594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.431610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.431617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.431624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.431639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 
00:32:58.324 [2024-11-19 11:00:37.441565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.441614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.441626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.441634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.441640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.441654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 00:32:58.324 [2024-11-19 11:00:37.451632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.451680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.451693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.451700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.451707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.451721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 00:32:58.324 [2024-11-19 11:00:37.461623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.461703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.461716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.461723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.461730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.461744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 
00:32:58.324 [2024-11-19 11:00:37.471530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.471577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.471591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.471599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.471608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.471624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 00:32:58.324 [2024-11-19 11:00:37.481620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.481695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.481708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.481715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.481723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.481737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 00:32:58.324 [2024-11-19 11:00:37.491745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.491792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.491805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.491812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.491819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.491833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 
00:32:58.324 [2024-11-19 11:00:37.501675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.501769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.501783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.501790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.501797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.501811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 00:32:58.324 [2024-11-19 11:00:37.511762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.324 [2024-11-19 11:00:37.511810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.324 [2024-11-19 11:00:37.511824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.324 [2024-11-19 11:00:37.511833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.324 [2024-11-19 11:00:37.511841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.324 [2024-11-19 11:00:37.511855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.324 qpair failed and we were unable to recover it. 00:32:58.587 [2024-11-19 11:00:37.521766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.587 [2024-11-19 11:00:37.521813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.587 [2024-11-19 11:00:37.521827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.587 [2024-11-19 11:00:37.521834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.587 [2024-11-19 11:00:37.521841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.587 [2024-11-19 11:00:37.521855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.587 qpair failed and we were unable to recover it. 
00:32:58.587 [2024-11-19 11:00:37.531832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.587 [2024-11-19 11:00:37.531880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.587 [2024-11-19 11:00:37.531893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.587 [2024-11-19 11:00:37.531901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.587 [2024-11-19 11:00:37.531908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.587 [2024-11-19 11:00:37.531923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.587 qpair failed and we were unable to recover it. 00:32:58.587 [2024-11-19 11:00:37.541825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.587 [2024-11-19 11:00:37.541880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.587 [2024-11-19 11:00:37.541905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.587 [2024-11-19 11:00:37.541914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.587 [2024-11-19 11:00:37.541921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.587 [2024-11-19 11:00:37.541941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.587 qpair failed and we were unable to recover it. 00:32:58.587 [2024-11-19 11:00:37.551738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.587 [2024-11-19 11:00:37.551784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.587 [2024-11-19 11:00:37.551799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.587 [2024-11-19 11:00:37.551807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.587 [2024-11-19 11:00:37.551815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.587 [2024-11-19 11:00:37.551830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.587 qpair failed and we were unable to recover it. 
00:32:58.587 [2024-11-19 11:00:37.561857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.587 [2024-11-19 11:00:37.561906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.587 [2024-11-19 11:00:37.561923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.587 [2024-11-19 11:00:37.561931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.587 [2024-11-19 11:00:37.561937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.587 [2024-11-19 11:00:37.561952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.587 qpair failed and we were unable to recover it. 00:32:58.587 [2024-11-19 11:00:37.571940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.587 [2024-11-19 11:00:37.571993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.587 [2024-11-19 11:00:37.572006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.587 [2024-11-19 11:00:37.572013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.587 [2024-11-19 11:00:37.572020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.587 [2024-11-19 11:00:37.572035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.587 qpair failed and we were unable to recover it. 00:32:58.587 [2024-11-19 11:00:37.581936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.587 [2024-11-19 11:00:37.581982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.587 [2024-11-19 11:00:37.581995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.582002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.582008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.582023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 
00:32:58.588 [2024-11-19 11:00:37.592012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.592091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.592105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.592112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.592119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.592134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 00:32:58.588 [2024-11-19 11:00:37.601985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.602028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.602041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.602048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.602058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.602073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 00:32:58.588 [2024-11-19 11:00:37.612029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.612082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.612095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.612103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.612109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.612124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 
00:32:58.588 [2024-11-19 11:00:37.622044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.622090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.622103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.622111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.622117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.622132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 00:32:58.588 [2024-11-19 11:00:37.632101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.632172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.632185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.632192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.632199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.632214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 00:32:58.588 [2024-11-19 11:00:37.642088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.642140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.642155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.642167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.642173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.642188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 
00:32:58.588 [2024-11-19 11:00:37.652212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.652261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.652274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.652281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.652288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.652302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 00:32:58.588 [2024-11-19 11:00:37.662191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.662240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.662253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.662261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.662267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.662282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 00:32:58.588 [2024-11-19 11:00:37.672061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.672104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.672116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.672124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.672131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.672145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 
00:32:58.588 [2024-11-19 11:00:37.682230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.682276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.682289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.682296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.682303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.682318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 00:32:58.588 [2024-11-19 11:00:37.692266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.692315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.692331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.692338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.692345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.692360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 00:32:58.588 [2024-11-19 11:00:37.702247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.702298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.702311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.588 [2024-11-19 11:00:37.702318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.588 [2024-11-19 11:00:37.702325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.588 [2024-11-19 11:00:37.702339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.588 qpair failed and we were unable to recover it. 
00:32:58.588 [2024-11-19 11:00:37.712312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.588 [2024-11-19 11:00:37.712403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.588 [2024-11-19 11:00:37.712417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.589 [2024-11-19 11:00:37.712424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.589 [2024-11-19 11:00:37.712431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.589 [2024-11-19 11:00:37.712446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.589 qpair failed and we were unable to recover it. 00:32:58.589 [2024-11-19 11:00:37.722320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.589 [2024-11-19 11:00:37.722391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.589 [2024-11-19 11:00:37.722403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.589 [2024-11-19 11:00:37.722411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.589 [2024-11-19 11:00:37.722418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.589 [2024-11-19 11:00:37.722433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.589 qpair failed and we were unable to recover it. 00:32:58.589 [2024-11-19 11:00:37.732385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.589 [2024-11-19 11:00:37.732435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.589 [2024-11-19 11:00:37.732448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.589 [2024-11-19 11:00:37.732458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.589 [2024-11-19 11:00:37.732465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:58.589 [2024-11-19 11:00:37.732480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:58.589 qpair failed and we were unable to recover it. 
00:32:58.589 [2024-11-19 11:00:37.742348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.589 [2024-11-19 11:00:37.742395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.589 [2024-11-19 11:00:37.742407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.589 [2024-11-19 11:00:37.742415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.589 [2024-11-19 11:00:37.742422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90
00:32:58.589 [2024-11-19 11:00:37.742436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:58.589 qpair failed and we were unable to recover it.
[the seven-line CONNECT failure sequence above repeats for 68 further qpair connect attempts at roughly 10 ms intervals, from 11:00:37.752 through 11:00:38.424 (elapsed 00:32:58.589 to 00:32:59.382); every attempt fails with the same "Unknown controller ID 0x1" target-side error and "sct 1, sc 130" completion status on tqpair=0x7f8410000b90, and each ends with "qpair failed and we were unable to recover it."]
00:32:59.382 [2024-11-19 11:00:38.434239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.382 [2024-11-19 11:00:38.434284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.382 [2024-11-19 11:00:38.434297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.382 [2024-11-19 11:00:38.434304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.382 [2024-11-19 11:00:38.434311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.382 [2024-11-19 11:00:38.434326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.382 qpair failed and we were unable to recover it. 00:32:59.382 [2024-11-19 11:00:38.444250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.382 [2024-11-19 11:00:38.444298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.382 [2024-11-19 11:00:38.444311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.382 [2024-11-19 11:00:38.444318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.382 [2024-11-19 11:00:38.444325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.382 [2024-11-19 11:00:38.444339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.382 qpair failed and we were unable to recover it. 00:32:59.382 [2024-11-19 11:00:38.454251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.382 [2024-11-19 11:00:38.454295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.382 [2024-11-19 11:00:38.454308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.382 [2024-11-19 11:00:38.454315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.382 [2024-11-19 11:00:38.454321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.382 [2024-11-19 11:00:38.454335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.382 qpair failed and we were unable to recover it. 
00:32:59.382 [2024-11-19 11:00:38.464300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.382 [2024-11-19 11:00:38.464345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.382 [2024-11-19 11:00:38.464359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.382 [2024-11-19 11:00:38.464366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.382 [2024-11-19 11:00:38.464372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.382 [2024-11-19 11:00:38.464390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.382 qpair failed and we were unable to recover it. 00:32:59.382 [2024-11-19 11:00:38.474334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.382 [2024-11-19 11:00:38.474380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.382 [2024-11-19 11:00:38.474393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.382 [2024-11-19 11:00:38.474401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.382 [2024-11-19 11:00:38.474407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.382 [2024-11-19 11:00:38.474421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.382 qpair failed and we were unable to recover it. 00:32:59.382 [2024-11-19 11:00:38.484369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.382 [2024-11-19 11:00:38.484412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.382 [2024-11-19 11:00:38.484425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.382 [2024-11-19 11:00:38.484433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.382 [2024-11-19 11:00:38.484439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.382 [2024-11-19 11:00:38.484453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.382 qpair failed and we were unable to recover it. 
00:32:59.382 [2024-11-19 11:00:38.494391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.382 [2024-11-19 11:00:38.494439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.382 [2024-11-19 11:00:38.494453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.382 [2024-11-19 11:00:38.494460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.382 [2024-11-19 11:00:38.494467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.382 [2024-11-19 11:00:38.494481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.382 qpair failed and we were unable to recover it. 00:32:59.382 [2024-11-19 11:00:38.504389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.383 [2024-11-19 11:00:38.504435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.383 [2024-11-19 11:00:38.504448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.383 [2024-11-19 11:00:38.504456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.383 [2024-11-19 11:00:38.504462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.383 [2024-11-19 11:00:38.504476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.383 qpair failed and we were unable to recover it. 00:32:59.383 [2024-11-19 11:00:38.514427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.383 [2024-11-19 11:00:38.514477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.383 [2024-11-19 11:00:38.514490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.383 [2024-11-19 11:00:38.514497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.383 [2024-11-19 11:00:38.514504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.383 [2024-11-19 11:00:38.514518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.383 qpair failed and we were unable to recover it. 
00:32:59.383 [2024-11-19 11:00:38.524466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.383 [2024-11-19 11:00:38.524512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.383 [2024-11-19 11:00:38.524525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.383 [2024-11-19 11:00:38.524532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.383 [2024-11-19 11:00:38.524539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.383 [2024-11-19 11:00:38.524553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.383 qpair failed and we were unable to recover it. 00:32:59.383 [2024-11-19 11:00:38.534476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.383 [2024-11-19 11:00:38.534520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.383 [2024-11-19 11:00:38.534533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.383 [2024-11-19 11:00:38.534541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.383 [2024-11-19 11:00:38.534548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.383 [2024-11-19 11:00:38.534562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.383 qpair failed and we were unable to recover it. 00:32:59.383 [2024-11-19 11:00:38.544555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.383 [2024-11-19 11:00:38.544641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.383 [2024-11-19 11:00:38.544654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.383 [2024-11-19 11:00:38.544663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.383 [2024-11-19 11:00:38.544669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.383 [2024-11-19 11:00:38.544683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.383 qpair failed and we were unable to recover it. 
00:32:59.383 [2024-11-19 11:00:38.554560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.383 [2024-11-19 11:00:38.554607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.383 [2024-11-19 11:00:38.554623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.383 [2024-11-19 11:00:38.554630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.383 [2024-11-19 11:00:38.554637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.383 [2024-11-19 11:00:38.554651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.383 qpair failed and we were unable to recover it. 00:32:59.383 [2024-11-19 11:00:38.564571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.383 [2024-11-19 11:00:38.564619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.383 [2024-11-19 11:00:38.564632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.383 [2024-11-19 11:00:38.564640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.383 [2024-11-19 11:00:38.564646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8410000b90 00:32:59.383 [2024-11-19 11:00:38.564661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:59.383 qpair failed and we were unable to recover it. 00:32:59.383 [2024-11-19 11:00:38.565068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153be00 is same with the state(6) to be set 00:32:59.383 [2024-11-19 11:00:38.574589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.383 [2024-11-19 11:00:38.574730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.383 [2024-11-19 11:00:38.574780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.383 [2024-11-19 11:00:38.574804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.644 [2024-11-19 11:00:38.574825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f840c000b90 00:32:59.644 [2024-11-19 11:00:38.574876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.644 qpair failed and we were unable to recover it. 
00:32:59.644 [2024-11-19 11:00:38.584636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.644 [2024-11-19 11:00:38.584708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.645 [2024-11-19 11:00:38.584752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.645 [2024-11-19 11:00:38.584770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.645 [2024-11-19 11:00:38.584784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f840c000b90 00:32:59.645 [2024-11-19 11:00:38.584821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.645 qpair failed and we were unable to recover it. 00:32:59.645 [2024-11-19 11:00:38.594717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.645 [2024-11-19 11:00:38.594814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.645 [2024-11-19 11:00:38.594872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.645 [2024-11-19 11:00:38.594906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.645 [2024-11-19 11:00:38.594925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15460c0 00:32:59.645 [2024-11-19 11:00:38.594973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.645 qpair failed and we were unable to recover it. 00:32:59.645 [2024-11-19 11:00:38.604601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.645 [2024-11-19 11:00:38.604687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.645 [2024-11-19 11:00:38.604722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.645 [2024-11-19 11:00:38.604743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.645 [2024-11-19 11:00:38.604761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15460c0 00:32:59.645 [2024-11-19 11:00:38.604798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:59.645 qpair failed and we were unable to recover it. 
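On the target side, "Unknown controller ID 0x1" from _nvmf_ctrlr_add_io_qpair() indicates the I/O-queue CONNECT carried a controller ID (cntlid) that no longer maps to a live controller, consistent with a test that deliberately disconnects and resets the target between attempts. The sketch in C below shows the fabrics CONNECT data fields involved, using struct spdk_nvmf_fabric_connect_data from spdk/nvmf_spec.h; fill_io_connect_data() is a hypothetical helper for illustration, not an SPDK API.

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvmf_spec.h"

    /* Hypothetical helper: populate the data payload of an I/O-queue
     * fabrics CONNECT. In the dynamic controller model the admin-queue
     * CONNECT uses cntlid 0xFFFF and the target assigns a real cntlid;
     * every I/O-queue CONNECT must echo that assigned value back. */
    static void
    fill_io_connect_data(struct spdk_nvmf_fabric_connect_data *data,
                         uint16_t assigned_cntlid)
    {
        memset(data, 0, sizeof(*data));
        /* 0x1 in the log: valid before the target went away,
         * stale (hence "Unknown controller ID") afterwards. */
        data->cntlid = assigned_cntlid;
        snprintf((char *)data->subnqn, sizeof(data->subnqn),
                 "nqn.2016-06.io.spdk:cnode1");
        /* Placeholder host NQN for illustration only. */
        snprintf((char *)data->hostnqn, sizeof(data->hostnqn),
                 "nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000");
    }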
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Write completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Write completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Write completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Write completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Write completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Write completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Write completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Write completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Write completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Write completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 Read completed with error (sct=0, sc=8)
00:32:59.645 starting I/O failed
00:32:59.645 [2024-11-19 11:00:38.605702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:59.645 [2024-11-19 11:00:38.614722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.645 [2024-11-19 11:00:38.614853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.645 [2024-11-19 11:00:38.614901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.645 [2024-11-19 11:00:38.614925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.645 [2024-11-19 11:00:38.614964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8418000b90
00:32:59.645 [2024-11-19 11:00:38.615015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:59.645 qpair failed and we were unable to recover it.
00:32:59.645 [2024-11-19 11:00:38.624665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.645 [2024-11-19 11:00:38.624792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.645 [2024-11-19 11:00:38.624844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.645 [2024-11-19 11:00:38.624865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.645 [2024-11-19 11:00:38.624882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8418000b90
00:32:59.645 [2024-11-19 11:00:38.624942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:59.645 qpair failed and we were unable to recover it.
00:32:59.645 [2024-11-19 11:00:38.625535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153be00 (9): Bad file descriptor
00:32:59.645 Initializing NVMe Controllers
00:32:59.645 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:59.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:59.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:32:59.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:32:59.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:32:59.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:32:59.645 Initialization complete. Launching workers.
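The "Read/Write completed with error (sct=0, sc=8)" lines above are I/O completions seen by the host once the qpair dies: status code type 0 is the generic command status set, in which status code 8 is "command aborted due to SQ deletion". A minimal sketch in C of a completion callback that would print such a line; the callback name and the submission call in the trailing comment are illustrative, not taken from the test.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative completion callback for spdk_nvme_ns_cmd_read()/write(). */
    static void
    io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* In the log: sct=0 (generic), sc=8 (aborted, SQ deleted). */
            fprintf(stderr, "%s completed with error (sct=%u, sc=%u)\n",
                    (const char *)cb_arg,
                    (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
        }
    }

    /* Submission side, e.g.:
     *   spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1, io_complete, "Read", 0);
     */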
00:32:59.645 Starting thread on core 1 00:32:59.645 Starting thread on core 2 00:32:59.645 Starting thread on core 3 00:32:59.645 Starting thread on core 0 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:32:59.645 00:32:59.645 real 0m11.494s 00:32:59.645 user 0m21.857s 00:32:59.645 sys 0m3.857s 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:59.645 ************************************ 00:32:59.645 END TEST nvmf_target_disconnect_tc2 00:32:59.645 ************************************ 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.645 rmmod nvme_tcp 00:32:59.645 rmmod nvme_fabrics 00:32:59.645 rmmod nvme_keyring 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1214802 ']' 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1214802 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1214802 ']' 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1214802 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1214802 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:32:59.645 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:32:59.646 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1214802' 00:32:59.646 killing process with pid 1214802 00:32:59.646 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1214802 00:32:59.646 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1214802 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.906 11:00:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.450 11:00:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.450 00:33:02.450 real 0m21.826s 00:33:02.450 user 0m49.841s 00:33:02.450 sys 0m10.052s 00:33:02.450 11:00:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.450 11:00:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:02.450 ************************************ 00:33:02.450 END TEST nvmf_target_disconnect 00:33:02.450 ************************************ 00:33:02.450 11:00:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:02.450 00:33:02.450 real 6m31.239s 00:33:02.450 user 11m23.797s 00:33:02.451 sys 2m14.770s 00:33:02.451 11:00:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.451 11:00:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.451 ************************************ 00:33:02.451 END TEST nvmf_host 00:33:02.451 ************************************ 00:33:02.451 11:00:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:33:02.451 11:00:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:33:02.451 11:00:41 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:02.451 11:00:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:02.451 11:00:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:02.451 11:00:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.451 ************************************ 00:33:02.451 START TEST nvmf_target_core_interrupt_mode 00:33:02.451 ************************************ 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:02.451 * Looking for test storage... 00:33:02.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.451 --rc genhtml_branch_coverage=1 00:33:02.451 --rc genhtml_function_coverage=1 00:33:02.451 --rc genhtml_legend=1 00:33:02.451 --rc geninfo_all_blocks=1 00:33:02.451 --rc geninfo_unexecuted_blocks=1 00:33:02.451 00:33:02.451 ' 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.451 --rc genhtml_branch_coverage=1 00:33:02.451 --rc genhtml_function_coverage=1 00:33:02.451 --rc genhtml_legend=1 00:33:02.451 --rc geninfo_all_blocks=1 00:33:02.451 --rc geninfo_unexecuted_blocks=1 00:33:02.451 00:33:02.451 ' 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.451 --rc genhtml_branch_coverage=1 00:33:02.451 --rc genhtml_function_coverage=1 00:33:02.451 --rc genhtml_legend=1 00:33:02.451 --rc geninfo_all_blocks=1 00:33:02.451 --rc geninfo_unexecuted_blocks=1 00:33:02.451 00:33:02.451 ' 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.451 --rc genhtml_branch_coverage=1 00:33:02.451 --rc genhtml_function_coverage=1 00:33:02.451 --rc genhtml_legend=1 00:33:02.451 --rc geninfo_all_blocks=1 00:33:02.451 --rc geninfo_unexecuted_blocks=1 00:33:02.451 00:33:02.451 ' 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.451 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:02.452 ************************************ 00:33:02.452 START TEST nvmf_abort 00:33:02.452 ************************************ 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:02.452 * Looking for test storage... 00:33:02.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:02.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.452 --rc genhtml_branch_coverage=1 00:33:02.452 --rc genhtml_function_coverage=1 00:33:02.452 --rc genhtml_legend=1 00:33:02.452 --rc geninfo_all_blocks=1 00:33:02.452 --rc geninfo_unexecuted_blocks=1 00:33:02.452 00:33:02.452 ' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:02.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.452 --rc genhtml_branch_coverage=1 00:33:02.452 --rc genhtml_function_coverage=1 00:33:02.452 --rc genhtml_legend=1 00:33:02.452 --rc geninfo_all_blocks=1 00:33:02.452 --rc geninfo_unexecuted_blocks=1 00:33:02.452 00:33:02.452 ' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:02.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.452 --rc genhtml_branch_coverage=1 00:33:02.452 --rc genhtml_function_coverage=1 00:33:02.452 --rc genhtml_legend=1 00:33:02.452 --rc geninfo_all_blocks=1 00:33:02.452 --rc geninfo_unexecuted_blocks=1 00:33:02.452 00:33:02.452 ' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:02.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.452 --rc genhtml_branch_coverage=1 00:33:02.452 --rc genhtml_function_coverage=1 00:33:02.452 --rc genhtml_legend=1 00:33:02.452 --rc geninfo_all_blocks=1 00:33:02.452 --rc geninfo_unexecuted_blocks=1 00:33:02.452 00:33:02.452 ' 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.452 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.713 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.714 11:00:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:33:02.714 11:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:10.852 11:00:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:10.852 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:10.853 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
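The trace above shows nvmf/common.sh sorting supported NICs into per-family arrays (e810, x722, mlx) by indexing an associative pci_bus_cache with "vendor:device" keys, then keeping only the e810 list for a TCP run on this rig. A minimal sketch of that pattern plus the discovery loop it feeds, assuming pci_bus_cache has already been filled from the PCI bus; the sysfs path is standard and the variable names mirror the trace:

    # Resolve each matched PCI port to its kernel net device via sysfs,
    # producing the "Found net devices under ..." lines seen nearby.
    intel=0x8086
    declare -A pci_bus_cache        # assumed populated elsewhere, e.g. from lspci -Dnmm
    e810=() net_devs=()
    e810+=(${pci_bus_cache["$intel:0x159b"]})            # the two 0000:4b:00.x ports above
    pci_devs=("${e810[@]}")                              # tcp on e810 keeps only this family
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # one entry per netdev on the port
        pci_net_devs=("${pci_net_devs[@]##*/}")          # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done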
00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:10.853 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:10.853 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:10.853 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:10.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:10.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:33:10.853 00:33:10.853 --- 10.0.0.2 ping statistics --- 00:33:10.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.853 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:10.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:33:10.853 00:33:10.853 --- 10.0.0.1 ping statistics --- 00:33:10.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.853 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.853 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1220319 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1220319 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1220319 ']' 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.854 11:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.854 [2024-11-19 11:00:48.955017] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:10.854 [2024-11-19 11:00:48.955993] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:33:10.854 [2024-11-19 11:00:48.956034] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.854 [2024-11-19 11:00:49.051940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:10.854 [2024-11-19 11:00:49.087668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.854 [2024-11-19 11:00:49.087701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.854 [2024-11-19 11:00:49.087709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.854 [2024-11-19 11:00:49.087716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.854 [2024-11-19 11:00:49.087721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.854 [2024-11-19 11:00:49.089035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:10.854 [2024-11-19 11:00:49.089201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:10.854 [2024-11-19 11:00:49.089210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.854 [2024-11-19 11:00:49.144242] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:10.854 [2024-11-19 11:00:49.145199] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:10.854 [2024-11-19 11:00:49.146033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
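By this point the trace has carved the two E810 ports into a self-contained test topology: cvl_0_0 moves into a fresh namespace and takes the target address, cvl_0_1 stays in the root namespace as the initiator, and nvmf_tgt is then launched inside that namespace in interrupt mode. A condensed sketch of the same commands, run from the SPDK tree, with names, addresses, and flags copied from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                   # initiator -> target, as traced
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &         # cores 1-3, all tracepoint groups

The -m 0xE core mask matches the three "Reactor started on core 1/2/3" notices above, and -e 0xFFFF matches the "Tracepoint Group Mask 0xFFFF" notice.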
00:33:10.854 [2024-11-19 11:00:49.146140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.854 [2024-11-19 11:00:49.790101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.854 Malloc0 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.854 Delay0 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
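rpc_cmd in the trace is a test helper around SPDK's scripts/rpc.py; driven by abort.sh it builds the device stack one RPC at a time: a TCP transport, a 64 MiB malloc ramdisk, a delay bdev layered on top with one-second latencies (presumably so I/O stays in flight long enough for aborts to land), and a subsystem to expose it. Roughly equivalent direct rpc.py calls, with every flag copied from the trace (the latency comments follow rpc.py's usual bdev_delay_create option meanings and are worth double-checking):

    rpc=./scripts/rpc.py                     # path assumed; the CI run uses the workspace copy
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0           # 64 MiB, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # avg/p99 read+write latency, usec
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0

The listener on 10.0.0.2:4420 is added in the records that follow.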
00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.854 [2024-11-19 11:00:49.890053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.854 11:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:33:10.854 [2024-11-19 11:00:50.034082] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:13.403 Initializing NVMe Controllers 00:33:13.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:13.403 controller IO queue size 128 less than required 00:33:13.403 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:33:13.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:33:13.403 Initialization complete. Launching workers. 
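The abort example then hammers the subsystem for one second at queue depth 128 on a single core; -r takes the standard SPDK transport-ID string. Invocation as traced, run from the SPDK build tree:

    # One-second abort storm against the delay-backed namespace;
    # flags copied verbatim from the trace.
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

Reading the counters below: the 27779 "failed" I/Os pair with the 27779 "success" aborts, which suggests those I/Os completed with aborted status as intended, while only 57 aborts raced their I/O's completion and 66 could not be submitted at all.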
00:33:13.403 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27779 00:33:13.403 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27836, failed to submit 66 00:33:13.403 success 27779, unsuccessful 57, failed 0 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:13.403 rmmod nvme_tcp 00:33:13.403 rmmod nvme_fabrics 00:33:13.403 rmmod nvme_keyring 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1220319 ']' 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1220319 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1220319 ']' 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1220319 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1220319 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1220319' 00:33:13.403 killing process with pid 1220319 
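Teardown then unwinds in reverse: the subsystem is deleted, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded inside a bounded retry loop (for i in {1..20} with set +e/-e around it), and the target process goes through the killprocess helper, which refuses to signal a PID whose command name it cannot verify. A rough sketch of that guard, inferred from the checks visible in the trace; the sudo branch is an assumption about the helper's intent:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")     # reactor_1 in the trace
            [[ $name == sudo ]] && return 1             # assumption: never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid" && { wait "$pid" 2>/dev/null || true; }
    }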
00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1220319 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1220319 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.403 11:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.319 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:15.319 00:33:15.319 real 0m13.062s 00:33:15.319 user 0m10.823s 00:33:15.319 sys 0m6.621s 00:33:15.319 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.319 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:15.319 ************************************ 00:33:15.319 END TEST nvmf_abort 00:33:15.319 ************************************ 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:15.581 ************************************ 00:33:15.581 START TEST nvmf_ns_hotplug_stress 00:33:15.581 ************************************ 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:15.581 * Looking for test storage... 
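The iptables-save | grep -v SPDK_NVMF | iptables-restore step in the teardown just above is the counterpart of the ipts call at test setup: every rule the test adds is stamped with an SPDK_NVMF comment at insert time, so cleanup can rewrite the whole ruleset minus anything carrying that tag instead of tracking individual rules. A sketch of the pair, matching the commands in the trace:

    ipts() {  # setup-side wrapper: tag the rule with its own argument string
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {  # teardown: drop every tagged rule in one pass
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # as at test start
    iptr                                                        # as at test end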
00:33:15.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.581 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:15.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.581 --rc genhtml_branch_coverage=1 00:33:15.581 --rc genhtml_function_coverage=1 00:33:15.581 --rc genhtml_legend=1 00:33:15.581 --rc geninfo_all_blocks=1 00:33:15.581 --rc geninfo_unexecuted_blocks=1 00:33:15.581 00:33:15.581 ' 00:33:15.843 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:15.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.843 --rc genhtml_branch_coverage=1 00:33:15.843 --rc genhtml_function_coverage=1 00:33:15.843 --rc genhtml_legend=1 00:33:15.843 --rc geninfo_all_blocks=1 00:33:15.843 --rc geninfo_unexecuted_blocks=1 00:33:15.843 00:33:15.843 ' 00:33:15.843 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:15.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.844 --rc genhtml_branch_coverage=1 00:33:15.844 --rc genhtml_function_coverage=1 00:33:15.844 --rc genhtml_legend=1 00:33:15.844 --rc geninfo_all_blocks=1 00:33:15.844 --rc geninfo_unexecuted_blocks=1 00:33:15.844 00:33:15.844 ' 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:15.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.844 --rc genhtml_branch_coverage=1 00:33:15.844 --rc genhtml_function_coverage=1 
00:33:15.844 --rc genhtml_legend=1 00:33:15.844 --rc geninfo_all_blocks=1 00:33:15.844 --rc geninfo_unexecuted_blocks=1 00:33:15.844 00:33:15.844 ' 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
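The hotplug-stress test re-sources nvmf/common.sh, so the host-identity setup a few lines above runs again: nvme gen-hostnqn mints a UUID-based host NQN, the UUID doubles as the host ID, and both are kept as ready-made flags. A sketch of that pattern; the ID extraction is an assumption about how common.sh slices the NQN, and the final line only illustrates how the NVME_HOST array is meant to be consumed by the NVME_CONNECT command later:

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # the trailing UUID, as seen in the trace
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"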
00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:33:15.844 11:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.989 11:01:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:23.989 11:01:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:23.989 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:23.989 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.989 
11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:23.989 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:23.989 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.989 11:01:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:23.989 11:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:23.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:23.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms
00:33:23.989 
00:33:23.989 --- 10.0.0.2 ping statistics ---
00:33:23.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:23.989 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:23.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:23.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms
00:33:23.989 
00:33:23.989 --- 10.0.0.1 ping statistics ---
00:33:23.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:23.989 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1225254
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1225254
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1225254 ']'
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:23.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
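
The nvmftestinit trace above boils down to a short iproute2 sequence. A condensed, standalone sketch follows: the interface names (cvl_0_0/cvl_0_1), the namespace name, and the 10.0.0.0/24 addressing are taken from this run, while the script framing is an approximation of the helper, not a copy of it:

    #!/usr/bin/env bash
    # One dual-port NIC split so a single host can act as target and initiator:
    # the target port moves into a network namespace, the initiator port stays
    # in the root namespace, and traffic between them crosses the real link.
    set -e
    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NETNS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NETNS"
    ip link set "$TARGET_IF" netns "$NETNS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                      # NVMF_INITIATOR_IP
    ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # NVMF_FIRST_TARGET_IP
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NETNS" ip link set "$TARGET_IF" up
    ip netns exec "$NETNS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                           # initiator side -> target side
    ip netns exec "$NETNS" ping -c 1 10.0.0.1    # target side -> initiator side

Keeping only the target port in a namespace is what lets one machine exercise both ends over a physical link, which is why the two pings above actually traverse the wire.
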
00:33:23.989 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:23.990 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:33:23.990 [2024-11-19 11:01:02.119350] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:23.990 [2024-11-19 11:01:02.120312] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:33:23.990 [2024-11-19 11:01:02.120347] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:23.990 [2024-11-19 11:01:02.214415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:23.990 [2024-11-19 11:01:02.250071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:23.990 [2024-11-19 11:01:02.250108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:23.990 [2024-11-19 11:01:02.250116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:23.990 [2024-11-19 11:01:02.250123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:23.990 [2024-11-19 11:01:02.250129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:23.990 [2024-11-19 11:01:02.251449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:33:23.990 [2024-11-19 11:01:02.251598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:23.990 [2024-11-19 11:01:02.251598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:33:23.990 [2024-11-19 11:01:02.306181] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:23.990 [2024-11-19 11:01:02.307178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:33:23.990 [2024-11-19 11:01:02.308181] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:23.990 [2024-11-19 11:01:02.308269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
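
Outside the harness, the launch plus waitforlisten handshake above can be approximated as below. The nvmf_tgt flags are verbatim from the trace; the polling loop is only a stand-in for SPDK's waitforlisten shell helper (assumed behavior: poll the RPC socket until the app answers, and bail out if the process dies first):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -i 0: shm id 0, -e 0xFFFF: enable every tracepoint group,
    # --interrupt-mode: reactors sleep instead of busy-polling,
    # -m 0xE: run on cores 1-3 (matching "Total cores available: 3" above).
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!

    # Poll /var/tmp/spdk.sock until the target answers RPCs (max_retries=100).
    for ((retry = 0; retry < 100; retry++)); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" 2> /dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done
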
00:33:23.990 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:23.990 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:33:23.990 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:23.990 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:23.990 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:33:23.990 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:23.990 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:33:23.990 11:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:33:23.990 [2024-11-19 11:01:03.124405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:23.990 11:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:33:24.251 11:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:24.512 [2024-11-19 11:01:03.529355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:24.512 11:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:24.772 11:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:33:24.772 Malloc0
00:33:25.032 11:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:25.032 Delay0
00:33:25.032 11:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:25.293 11:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:33:25.554 NULL1
00:33:25.554 11:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
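
For reference, the whole provisioning chain traced above is nine rpc.py calls against the default /var/tmp/spdk.sock socket. Every name and parameter below is verbatim from this run; the comments are glosses (for bdev_delay_create, -r/-t/-w/-n are average/p99 read and write latencies in microseconds):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0           # 32 MiB RAM bdev, 512-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
    $rpc bdev_null_create NULL1 1000 512                # null bdev, resized by the stress loop
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2

The hotplug loop that follows then repeatedly removes NSID 1, re-adds Delay0, and resizes NULL1 upward one step per pass (bdev_null_resize NULL1 1001, 1002, ...) while spdk_nvme_perf runs against the subsystem.
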
00:33:25.815 11:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1225631 00:33:25.815 11:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:33:25.815 11:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:25.815 11:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:26.758 Read completed with error (sct=0, sc=11) 00:33:26.758 11:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:26.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.019 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:33:27.019 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:33:27.280 true 00:33:27.280 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:27.280 11:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:28.223 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:28.223 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:33:28.223 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:33:28.484 true 00:33:28.484 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:28.484 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:28.745 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:33:28.745 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:33:28.745 11:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:33:29.006 true 00:33:29.006 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:29.006 11:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:30.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.389 11:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:30.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.389 11:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:33:30.389 11:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:33:30.389 true 00:33:30.389 11:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:30.389 11:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:31.328 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:31.588 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:33:31.588 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:33:31.588 true 00:33:31.588 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:31.588 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:31.848 11:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:32.109 11:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:33:32.109 11:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:33:32.109 true 00:33:32.109 11:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:32.109 11:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:33.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.494 11:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:33.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:33.495 11:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:33:33.495 11:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:33:33.755 true 00:33:33.755 11:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:33.755 11:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:34.695 11:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:34.695 11:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:33:34.695 11:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:33:34.957 true 00:33:34.957 11:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:34.957 11:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:33:35.218 11:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:35.218 11:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:33:35.218 11:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:33:35.478 true 00:33:35.478 11:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:35.478 11:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:36.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:36.862 11:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:36.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:36.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:36.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:36.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:36.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:36.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:36.862 11:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:33:36.862 11:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:33:36.862 true 00:33:36.862 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:36.862 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:37.805 11:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:38.065 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:33:38.065 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:33:38.065 true 00:33:38.065 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:38.066 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:38.326 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:38.586 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:33:38.586 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:33:38.586 true 00:33:38.847 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:38.847 11:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:39.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:39.787 11:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:40.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.047 11:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:33:40.047 11:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:33:40.308 true 00:33:40.308 11:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:40.308 11:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:41.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:41.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:41.249 11:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:41.249 11:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:33:41.249 11:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:33:41.509 true 00:33:41.509 11:01:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:41.509 11:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:41.770 11:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:41.770 11:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:33:41.770 11:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:33:42.031 true 00:33:42.031 11:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:42.031 11:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:42.292 11:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:42.292 11:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:33:42.292 11:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:33:42.552 true 00:33:42.552 11:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:42.552 11:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:42.813 11:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:42.813 11:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:33:42.813 11:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:33:43.073 true 00:33:43.074 11:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:43.074 11:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:43.334 11:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:43.334 11:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:33:43.334 11:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:33:43.594 true 00:33:43.594 11:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:43.594 11:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:43.855 11:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:44.115 11:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:33:44.115 11:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:33:44.115 true 00:33:44.115 11:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:44.115 11:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:44.375 11:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:44.635 11:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:33:44.636 11:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:33:44.636 true 00:33:44.636 11:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:44.636 11:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:44.896 11:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:45.157 11:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:33:45.157 11:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:33:45.157 true 00:33:45.157 11:01:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:45.157 11:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:45.418 11:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:45.678 11:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:33:45.678 11:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:33:45.678 true 00:33:45.678 11:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:45.678 11:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:45.938 11:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:46.198 11:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:33:46.199 11:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:33:46.459 true 00:33:46.459 11:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:46.459 11:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:47.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:47.400 11:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:47.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:47.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:47.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:47.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:47.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:47.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:47.660 11:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:33:47.660 11:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1024 00:33:47.922 true 00:33:47.922 11:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:47.922 11:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:48.866 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:48.866 11:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:48.866 11:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:33:48.866 11:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:33:49.126 true 00:33:49.126 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:49.126 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:49.126 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:49.388 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:33:49.388 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:33:49.650 true 00:33:49.650 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:49.650 11:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:50.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.862 11:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:50.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.862 11:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:33:50.862 11:01:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:33:51.124 true 00:33:51.124 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:51.124 11:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:52.067 11:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:52.067 11:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:33:52.067 11:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:33:52.329 true 00:33:52.329 11:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:52.329 11:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:52.591 11:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:52.591 11:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:33:52.591 11:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:33:52.852 true 00:33:52.852 11:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:52.852 11:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:54.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:54.239 11:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:54.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:54.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:54.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:54.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:54.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:54.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:54.239 11:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:33:54.239 11:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:33:54.239 true 00:33:54.500 11:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:54.500 11:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:55.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:55.335 11:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:55.335 11:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:33:55.335 11:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:33:55.597 true 00:33:55.597 11:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:55.597 11:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:55.859 11:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:55.859 Initializing NVMe Controllers 00:33:55.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:55.859 Controller IO queue size 128, less than required. 00:33:55.859 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:55.859 Controller IO queue size 128, less than required. 00:33:55.859 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:55.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:55.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:55.859 Initialization complete. Launching workers. 
00:33:55.859 ========================================================
00:33:55.859 Latency(us)
00:33:55.859 Device Information : IOPS MiB/s Average min max
00:33:55.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2041.64 1.00 38031.06 1610.39 1018249.86
00:33:55.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17743.09 8.66 7214.27 1173.50 406219.27
00:33:55.859 ========================================================
00:33:55.859 Total : 19784.73 9.66 10394.34 1173.50 1018249.86
00:33:55.859
00:33:55.859 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:33:55.859 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:33:56.120 true 00:33:56.121 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1225631 00:33:56.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1225631) - No such process 00:33:56.121 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1225631 00:33:56.121 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:56.382 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:56.382 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:33:56.382 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:33:56.382 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:33:56.382 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:56.382 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:33:56.643 null0 00:33:56.643 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:56.643 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:56.643 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:33:56.904 null1 00:33:56.904 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:56.904 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:56.904 11:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:33:56.904 null2 00:33:56.904 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:56.904 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:56.904 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:33:57.166 null3 00:33:57.166 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:57.166 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:57.166 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:33:57.427 null4 00:33:57.427 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:57.427 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:57.427 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:33:57.427 null5 00:33:57.427 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:57.427 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:57.427 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:33:57.689 null6 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:33:57.689 null7 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
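The @44-@50 xtrace entries earlier in this section all belong to one loop in ns_hotplug_stress.sh: while the background I/O generator (PID 1225631) is alive, hot-remove namespace 1, hot-add it back, and grow NULL1 by one unit. A minimal bash reconstruction, inferred from the trace alone (the rpc_py shorthand and the exact loop form are assumptions, not the script's verbatim text):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=1225631                                   # background I/O generator seen in this log
    null_size=1024

    while kill -0 "$perf_pid"; do                      # @44: keep stressing while I/O runs
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1    # @45: hot-remove namespace 1 under load
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0  # @46: hot-add it back (Delay0 bdev)
        null_size=$((null_size + 1))                   # @49: bump the resize target
        "$rpc_py" bdev_null_resize NULL1 "$null_size"  # @50: resize NULL1 while I/O continues
    done
    wait "$perf_pid"                                   # @53: reap the generator once it exits

Once the generator exits, kill -0 fails and bash prints the "kill: (1225631) - No such process" entry visible above, which ends the loop and moves the test into its multi-threaded phase.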
00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:33:57.689 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
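The @58-@64 entries interleaved through this stretch are the setup for that multi-threaded phase: create one null bdev per worker, then launch eight add_remove workers in the background. A hedged sketch of what the trace corresponds to (reusing the $rpc_py shorthand from the sketch above; the loop syntax is an assumption):

    nthreads=8                                         # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do               # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096   # @60: 100 MiB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do               # @62
        add_remove "$((i + 1))" "null$i" &             # @63: worker hot-plugging nsid i+1 on null$i
        pids+=($!)                                     # @64: remember each worker's PID
    done
    wait "${pids[@]}"                                  # @66: the "wait 1231805 1231807 ..." entry below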
00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
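From here to the end of the excerpt, the log is the interleaved xtrace of those eight background workers, each running the script's add_remove function. A minimal reconstruction from the @14-@18 entries (the function body is inferred from the trace, not copied from the script):

    add_remove() {                                     # invoked as: add_remove <nsid> <bdev> &
        local nsid=$1 bdev=$2                          # @14: e.g. "local nsid=3 bdev=null2"
        local i
        for ((i = 0; i < 10; i++)); do                 # @16: ten hot-plug rounds per worker
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

Eight of these racing against one another on cnode1 is what produces the dense, unordered add_ns/remove_ns churn that fills the remainder of this section.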
00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1231805 1231807 1231810 1231812 1231815 1231818 1231821 1231822 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:57.690 11:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:57.952 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:57.952 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:57.952 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:57.952 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:57.952 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:57.952 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:57.952 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:57.952 11:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.214 11:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:58.214 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.477 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.738 11:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:58.738 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:59.000 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:59.000 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.000 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.000 11:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:59.000 11:01:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:59.000 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:59.262 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.524 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:59.785 11:01:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:59.785 11:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:00.047 11:01:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.047 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:00.309 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.309 11:01:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.310 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.570 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:00.831 11:01:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:00.831 11:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:00.831 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:00.831 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.831 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:01.094 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:01.094 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:01.094 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.094 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.094 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:01.094 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.094 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:01.095 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.356 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.357 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:01.357 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.357 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.357 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.357 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.357 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.357 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.357 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:01.617 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.617 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.617 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.617 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.618 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:01.618 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:01.618 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.618 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.618 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.618 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:01.878 rmmod nvme_tcp 00:34:01.878 rmmod nvme_fabrics 00:34:01.878 rmmod nvme_keyring 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:34:01.878 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1225254 ']' 00:34:01.879 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1225254 00:34:01.879 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1225254 ']' 00:34:01.879 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1225254 00:34:01.879 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:34:01.879 11:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:01.879 11:01:40 
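
The wall of rpc.py calls above is the tail of the namespace hotplug stress phase: ns_hotplug_stress.sh line 16 drives a ten-round loop whose body attaches (line 17) and detaches (line 18) a namespace on nqn.2016-06.io.spdk:cnode1, and the interleaved nsids and timestamps indicate several of these loops running concurrently, one per null bdev. Once every counter reaches 10 the trap is cleared and nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring and kills the target. A hedged reconstruction of the loop follows; only the @16-@18 loop body is visible in the trace, so the per-namespace fan-out is inferred, not confirmed:

  # Reconstructed from the ns_hotplug_stress.sh@16-18 trace lines above. The
  # rpc path, NQN, nsids 1-8 and bdev names null0-null7 are taken from the
  # log; the parallel worker-per-namespace structure is an assumption that
  # fits the interleaved ordering of the trace.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  add_remove() {                                   # hotplug one nsid repeatedly
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; ++i)); do               # sh@16: ten rounds
          $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # sh@17: attach
          $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"           # sh@18: detach
      done
  }
  for n in {1..8}; do add_remove "$n" "null$((n - 1))" & done    # nsids 1-8
  wait
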
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1225254 00:34:01.879 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:01.879 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:01.879 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1225254' 00:34:01.879 killing process with pid 1225254 00:34:01.879 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1225254 00:34:01.879 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1225254 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.140 11:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.052 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:04.052 00:34:04.052 real 0m48.632s 00:34:04.052 user 2m57.296s 00:34:04.052 sys 0m19.886s 00:34:04.052 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.052 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:04.052 ************************************ 00:34:04.052 END TEST nvmf_ns_hotplug_stress 00:34:04.052 ************************************ 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:04.314 ************************************ 00:34:04.314 START TEST nvmf_delete_subsystem 00:34:04.314 ************************************ 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:04.314 * Looking for test storage... 00:34:04.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.314 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:04.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.315 --rc genhtml_branch_coverage=1 00:34:04.315 --rc genhtml_function_coverage=1 00:34:04.315 --rc genhtml_legend=1 00:34:04.315 --rc geninfo_all_blocks=1 00:34:04.315 --rc geninfo_unexecuted_blocks=1 00:34:04.315 00:34:04.315 ' 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:04.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.315 --rc genhtml_branch_coverage=1 00:34:04.315 --rc genhtml_function_coverage=1 00:34:04.315 --rc genhtml_legend=1 00:34:04.315 --rc geninfo_all_blocks=1 00:34:04.315 --rc geninfo_unexecuted_blocks=1 00:34:04.315 00:34:04.315 ' 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:04.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.315 --rc genhtml_branch_coverage=1 00:34:04.315 --rc genhtml_function_coverage=1 00:34:04.315 --rc genhtml_legend=1 00:34:04.315 --rc geninfo_all_blocks=1 00:34:04.315 --rc geninfo_unexecuted_blocks=1 00:34:04.315 00:34:04.315 ' 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:04.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.315 --rc genhtml_branch_coverage=1 00:34:04.315 --rc genhtml_function_coverage=1 00:34:04.315 --rc 
genhtml_legend=1 00:34:04.315 --rc geninfo_all_blocks=1 00:34:04.315 --rc geninfo_unexecuted_blocks=1 00:34:04.315 00:34:04.315 ' 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.315 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.577 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.578 11:01:43 
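
The lcov probe traced a little above (scripts/common.sh@333-368, driven by "lt 1.15 2") is a dotted version comparison: both version strings are split on ".", "-" and ":", the fields are compared numerically left to right with missing fields treated as zero, and the requested operator decides the return code. Because the installed lcov is older than 2, the legacy "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" spellings get exported in LCOV_OPTS. A condensed sketch of that comparison; the real helper also validates each field through a decimal() check, which is omitted here:

  # Condensed from the cmp_versions trace at scripts/common.sh@333-368.
  cmp_versions() {
      local IFS=.-:                  # split version fields on . - :
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < max; v++)); do              # missing fields count as 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # versions are equal
  }
  cmp_versions 1.15 '<' 2 && echo "1.15 < 2"       # succeeds, as in the trace
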
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:34:04.578 11:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:12.726 11:01:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:12.726 11:01:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:12.726 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.726 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:12.727 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.727 11:01:50 
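
The surrounding trace (nvmf/common.sh@313-429) is the NIC discovery pass: the allow-list arrays narrow pci_devs to two Intel E810 functions (0000:4b:00.0 and 0000:4b:00.1, vendor:device 0x8086:0x159b, driver ice), and each function is then resolved to its kernel interface through sysfs. A sketch of that per-device mapping loop, with the two PCI addresses filled in from the log:

  # Per-PCI-function mapping, as traced at nvmf/common.sh@410-429.
  pci_devs=(0000:4b:00.0 0000:4b:00.1)     # the two E810 functions found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # ifaces behind this function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths, keep ifnames
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")                   # here: cvl_0_0 and cvl_0_1
  done
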
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:12.727 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:12.727 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:12.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:34:12.727 00:34:12.727 --- 10.0.0.2 ping statistics --- 00:34:12.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.727 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:12.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:34:12.727 00:34:12.727 --- 10.0.0.1 ping statistics --- 00:34:12.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.727 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1236889 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1236889 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1236889 ']' 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
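
The trace above (nvmf/common.sh@250-291) builds the physical loopback topology for the test: the first E810 port, cvl_0_0, is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while its peer cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; port 4420 is opened in iptables and a ping in each direction (0.658 ms and 0.271 ms above) confirms the link. The same setup as a plain command sequence, copied from the traced commands:

  # Topology as traced at nvmf/common.sh@265-291.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
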
00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:12.727 11:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.727 [2024-11-19 11:01:51.045933] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:12.727 [2024-11-19 11:01:51.047104] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:34:12.727 [2024-11-19 11:01:51.047166] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.727 [2024-11-19 11:01:51.149202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:12.728 [2024-11-19 11:01:51.200649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.728 [2024-11-19 11:01:51.200699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.728 [2024-11-19 11:01:51.200708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.728 [2024-11-19 11:01:51.200715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.728 [2024-11-19 11:01:51.200722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.728 [2024-11-19 11:01:51.202292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.728 [2024-11-19 11:01:51.202453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.728 [2024-11-19 11:01:51.278885] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:12.728 [2024-11-19 11:01:51.279385] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:12.728 [2024-11-19 11:01:51.279720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
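nvmfappstart above boils down to launching nvmf_tgt inside that namespace and then blocking in waitforlisten until the RPC socket appears. A rough equivalent, with the binary path and flags copied from the trace (the 100-try budget mirrors the max_retries=100 seen above; /var/tmp/spdk.sock is SPDK's default RPC socket):

NS=cvl_0_0_ns_spdk
APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# -m 0x3: cores 0-1, matching the two reactors in the log; --interrupt-mode
# is the variant this whole test group is exercising.
ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] && break      # socket exists: target is accepting RPCs
    kill -0 "$nvmfpid" 2>/dev/null || exit 1  # target died during startup
    sleep 0.1
done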
00:34:12.728 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.728 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:34:12.728 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:12.728 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:12.728 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.728 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.728 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:12.728 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.728 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.728 [2024-11-19 11:01:51.919371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.989 [2024-11-19 11:01:51.951962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.989 NULL1 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.989 11:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.989 Delay0 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1237233 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:34:12.989 11:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:12.989 [2024-11-19 11:01:52.073841] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
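rpc_cmd in this trace is autotest shorthand for SPDK's scripts/rpc.py talking to that socket. Replayed by hand, the provisioning and workload above would look roughly like the sketch below; the key detail is that Delay0 wraps the null bdev with 1000000 us (one second) configured latencies, which is what keeps a 128-deep queue of I/O in flight long enough for the upcoming delete to catch it:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192          # transport opts verbatim from the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                  # 1000 MB null bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000       # avg/p99 read and write latency, in us
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Host side: 5 s of 70/30 randrw at 512 B, queue depth 128, from cores 2-3.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!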
00:34:14.907 11:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:14.907 11:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:14.907 11:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:34:15.169 Read completed with error (sct=0, sc=8)
00:34:15.169 Read completed with error (sct=0, sc=8)
00:34:15.169 starting I/O failed: -6
[... further Read/Write "completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" elided ...]
00:34:15.169 [2024-11-19 11:01:54.238833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69e2c0 is same with the state(6) to be set
[... further Read/Write completions with error (sct=0, sc=8) and "starting I/O failed: -6" retries elided ...]
00:34:15.169 [2024-11-19 11:01:54.243896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f954c000c40 is same with the state(6) to be set
[... further Read/Write completions with error (sct=0, sc=8) elided ...]
00:34:16.116 [2024-11-19 11:01:55.214807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69f9a0 is same with the state(6) to be set
[... further Read/Write completions with error (sct=0, sc=8) elided ...]
00:34:16.116 [2024-11-19 11:01:55.242094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69e4a0 is same with the state(6) to be set
[... further Read/Write completions with error (sct=0, sc=8) elided ...]
00:34:16.116 [2024-11-19 11:01:55.242511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69e860 is same with the state(6) to be set
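Every completion above and below carries sct=0, sc=8: status code type 0 (generic command status) and, reading it against the NVMe status code table, 0x08, command aborted due to SQ deletion, which is exactly what tearing the subsystem down should hand back for requests still parked in Delay0. The move the test makes here, sketched with the names from this trace (RPC wrapper as assumed earlier):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Tear the subsystem down while spdk_nvme_perf still has I/O queued...
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# ...then require that perf exits non-zero; roughly what "NOT wait" asserts below.
if wait "$perf_pid"; then
    echo "expected spdk_nvme_perf to fail after the delete" >&2
    exit 1
fi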
[... further Read/Write completions with error (sct=0, sc=8) elided ...]
00:34:16.116 [2024-11-19 11:01:55.246286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f954c00d020 is same with the state(6) to be set
[... further Read/Write completions with error (sct=0, sc=8) elided ...]
00:34:16.116 [2024-11-19 11:01:55.246383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f954c00d7c0 is same with the state(6) to be set
00:34:16.116 Initializing NVMe Controllers
00:34:16.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:16.116 Controller IO queue size 128, less than required.
00:34:16.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:16.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:34:16.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:34:16.116 Initialization complete. Launching workers.
00:34:16.116 ========================================================
00:34:16.116 Latency(us)
00:34:16.116 Device Information : IOPS MiB/s Average min max
00:34:16.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.19 0.08 912003.36 380.15 1007568.90
00:34:16.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.69 0.08 934803.22 327.17 2002832.30
00:34:16.116 ========================================================
00:34:16.116 Total : 323.88 0.16 923385.77 327.17 2002832.30
00:34:16.116
00:34:16.116 [2024-11-19 11:01:55.246921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69f9a0 (9): Bad file descriptor
00:34:16.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:34:16.116 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:16.116 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:34:16.116 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1237233
00:34:16.116 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1237233
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1237233) - No such process
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1237233
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1237233
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1237233
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:16.686 [2024-11-19 11:01:55.779776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1237901 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1237901 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:16.686 11:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:16.686 [2024-11-19 11:01:55.876035] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
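The second phase recreates the subsystem, re-attaches Delay0, and starts a 3-second perf run; this trace shows no delete racing it, so the half-second polling loop traced next simply waits for perf to exit on its own, and the roughly 1000000 us averages in the upcoming latency table are Delay0's configured latencies showing through. The loop itself, sketched from the @56-@60 lines:

# Poll the perf process as delete_subsystem.sh does, giving up after
# 20 iterations x 0.5 s (~10 s) if it never exits.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "spdk_nvme_perf still running" >&2; exit 1; }
    sleep 0.5
done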
00:34:17.257 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:17.257 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1237901 00:34:17.257 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:17.829 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:17.829 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1237901 00:34:17.829 11:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:18.403 11:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:18.403 11:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1237901 00:34:18.403 11:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:18.665 11:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:18.665 11:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1237901 00:34:18.665 11:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:19.236 11:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:19.236 11:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1237901 00:34:19.236 11:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:19.810 11:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:19.810 11:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1237901 00:34:19.810 11:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:19.810 Initializing NVMe Controllers 00:34:19.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:19.810 Controller IO queue size 128, less than required. 00:34:19.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:19.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:19.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:19.810 Initialization complete. Launching workers. 
00:34:19.810 ========================================================
00:34:19.810 Latency(us)
00:34:19.810 Device Information : IOPS MiB/s Average min max
00:34:19.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002307.86 1000191.20 1006000.34
00:34:19.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004431.35 1000436.01 1042319.77
00:34:19.810 ========================================================
00:34:19.810 Total : 256.00 0.12 1003369.60 1000191.20 1042319.77
00:34:19.810
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1237901
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1237901) - No such process
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1237901
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1236889 ']'
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1236889
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1236889 ']'
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1236889
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1236889 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1236889' 00:34:20.380 killing process with pid 1236889 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1236889 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1236889 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:20.380 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:34:20.641 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:20.641 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:20.641 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.641 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.642 11:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.557 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:22.557 00:34:22.557 real 0m18.365s 00:34:22.557 user 0m26.622s 00:34:22.557 sys 0m7.461s 00:34:22.557 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.557 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:22.557 ************************************ 00:34:22.558 END TEST nvmf_delete_subsystem 00:34:22.558 ************************************ 00:34:22.558 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:22.558 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:22.558 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.558 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:22.558 ************************************ 00:34:22.558 START TEST nvmf_host_management 00:34:22.558 ************************************ 00:34:22.558 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:22.832 * Looking for test storage... 00:34:22.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:22.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.832 --rc genhtml_branch_coverage=1 00:34:22.832 --rc genhtml_function_coverage=1 00:34:22.832 --rc genhtml_legend=1 00:34:22.832 --rc geninfo_all_blocks=1 00:34:22.832 --rc geninfo_unexecuted_blocks=1 00:34:22.832 00:34:22.832 ' 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:22.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.832 --rc genhtml_branch_coverage=1 00:34:22.832 --rc genhtml_function_coverage=1 00:34:22.832 --rc genhtml_legend=1 00:34:22.832 --rc geninfo_all_blocks=1 00:34:22.832 --rc geninfo_unexecuted_blocks=1 00:34:22.832 00:34:22.832 ' 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:22.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.832 --rc genhtml_branch_coverage=1 00:34:22.832 --rc genhtml_function_coverage=1 00:34:22.832 --rc genhtml_legend=1 00:34:22.832 --rc geninfo_all_blocks=1 00:34:22.832 --rc geninfo_unexecuted_blocks=1 00:34:22.832 00:34:22.832 ' 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:22.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.832 --rc genhtml_branch_coverage=1 00:34:22.832 --rc genhtml_function_coverage=1 00:34:22.832 --rc genhtml_legend=1 
00:34:22.832 --rc geninfo_all_blocks=1 00:34:22.832 --rc geninfo_unexecuted_blocks=1 00:34:22.832 00:34:22.832 ' 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.832 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.833 11:02:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.833 11:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:31.104 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.104 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:34:31.104 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:31.104 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:31.104 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:31.104 11:02:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:31.104 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:31.104 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:34:31.104 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:31.104 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:34:31.104 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:31.105 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:31.105 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
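Editor's note: the gather_supported_nvmf_pci_devs trace above whitelists NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox devices), matches both E810 ports of this rig at 0000:4b:00.0/1, and then resolves each matching PCI function to its kernel netdev through sysfs. A minimal standalone sketch of the same lookup follows; it is an approximation, not the verbatim nvmf/common.sh code, and the 8086:159b filter is taken from this run:

  # List netdevs backed by Intel E810 (8086:159b) PCI functions.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for net in "/sys/bus/pci/devices/$pci/net/"*; do
          # The guard skips the literal glob when a function has no netdev.
          [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
      done
  done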
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:31.105 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:31.105 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:31.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:31.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms
00:34:31.105 
00:34:31.105 --- 10.0.0.2 ping statistics ---
00:34:31.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:31.105 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms
00:34:31.105 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:31.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:31.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms
00:34:31.105 
00:34:31.105 --- 10.0.0.1 ping statistics ---
00:34:31.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:31.105 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1242676
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1242676
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1242676 ']'
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:31.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:31.106 11:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:34:31.106 [2024-11-19 11:02:09.515627] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:31.106 [2024-11-19 11:02:09.516731] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:34:31.106 [2024-11-19 11:02:09.516783] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:31.106 [2024-11-19 11:02:09.616858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:31.106 [2024-11-19 11:02:09.670047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:31.106 [2024-11-19 11:02:09.670099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:31.106 [2024-11-19 11:02:09.670109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:31.106 [2024-11-19 11:02:09.670116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:31.106 [2024-11-19 11:02:09.670123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:31.106 [2024-11-19 11:02:09.672136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:34:31.106 [2024-11-19 11:02:09.672299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:34:31.106 [2024-11-19 11:02:09.672450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:31.106 [2024-11-19 11:02:09.672450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:34:31.106 [2024-11-19 11:02:09.750011] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:31.106 [2024-11-19 11:02:09.751210] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:31.106 [2024-11-19 11:02:09.751296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:34:31.106 [2024-11-19 11:02:09.751898] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:34:31.106 [2024-11-19 11:02:09.751939] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
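Editor's note: nvmf_tcp_init above carves the two E810 ports into a point-to-point test rig: one port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, iptables opens TCP/4420, and both directions are ping-verified before nvmf_tgt is started inside the namespace in interrupt mode. Condensed from the trace (interface names and addresses are from this run; the readiness wait at the end is a sketch of what waitforlisten does, not its exact code):

  # Target port into its own namespace; initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back
  # Launch the target inside the namespace, then wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done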
00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:31.367 [2024-11-19 11:02:10.381575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:31.367 Malloc0 00:34:31.367 [2024-11-19 11:02:10.493784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1242963 00:34:31.367 11:02:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1242963 /var/tmp/bdevperf.sock 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1242963 ']' 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:31.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:31.367 { 00:34:31.367 "params": { 00:34:31.367 "name": "Nvme$subsystem", 00:34:31.367 "trtype": "$TEST_TRANSPORT", 00:34:31.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.367 "adrfam": "ipv4", 00:34:31.367 "trsvcid": "$NVMF_PORT", 00:34:31.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.367 "hdgst": ${hdgst:-false}, 00:34:31.367 "ddgst": ${ddgst:-false} 00:34:31.367 }, 00:34:31.367 "method": "bdev_nvme_attach_controller" 00:34:31.367 } 00:34:31.367 EOF 00:34:31.367 )") 00:34:31.367 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:31.628 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
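Editor's note: gen_nvmf_target_json above expands one heredoc per subsystem into a bdev_nvme_attach_controller stanza, joins the stanzas with IFS=',', and validates the result with jq; the rendered document (printed just below) then reaches bdevperf over an anonymous file descriptor. The /dev/fd/63 in bdevperf's argument list is bash process substitution, i.e. the traced invocation amounts to roughly this sketch (gen_nvmf_target_json is the helper traced above, not a new interface):

  # <(...) surfaces in the child's argv as /dev/fd/63.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10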
00:34:31.628 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:31.628 11:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:31.628 "params": { 00:34:31.628 "name": "Nvme0", 00:34:31.628 "trtype": "tcp", 00:34:31.628 "traddr": "10.0.0.2", 00:34:31.628 "adrfam": "ipv4", 00:34:31.628 "trsvcid": "4420", 00:34:31.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:31.628 "hdgst": false, 00:34:31.628 "ddgst": false 00:34:31.628 }, 00:34:31.628 "method": "bdev_nvme_attach_controller" 00:34:31.628 }' 00:34:31.628 [2024-11-19 11:02:10.604467] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:34:31.628 [2024-11-19 11:02:10.604538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242963 ] 00:34:31.628 [2024-11-19 11:02:10.696882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:31.628 [2024-11-19 11:02:10.749802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.889 Running I/O for 10 seconds... 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=573 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 573 -ge 100 ']' 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.464 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:32.464 [2024-11-19 11:02:11.509487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd942a0 is same with the state(6) to be set 00:34:32.464 [2024-11-19 11:02:11.509566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd942a0 is same with the state(6) to be set 00:34:32.464 [2024-11-19 11:02:11.511000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.464 [2024-11-19 11:02:11.511065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.464 [2024-11-19 11:02:11.511087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.465 [2024-11-19 11:02:11.511819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.465 [2024-11-19 11:02:11.511828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.511840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.511848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.511859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.511868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.511878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.511885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.511895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.511903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.511914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.511922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.511932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.511939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.511949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.511956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.511966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.511974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.511984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.511991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.512261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-19 11:02:11.512270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.513607] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:32.466 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.466 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:32.466 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.466 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:32.466 task offset: 87168 on job bdev=Nvme0n1 fails 00:34:32.466 00:34:32.466 Latency(us) 00:34:32.466 [2024-11-19T10:02:11.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.466 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:32.466 Job: Nvme0n1 ended in about 0.43 seconds with error 00:34:32.466 Verification LBA range: start 0x0 length 0x400 00:34:32.466 Nvme0n1 : 0.43 1505.62 94.10 147.34 0.00 37572.68 1774.93 33641.81 00:34:32.466 [2024-11-19T10:02:11.661Z] =================================================================================================================== 00:34:32.466 [2024-11-19T10:02:11.661Z] Total : 1505.62 94.10 147.34 0.00 37572.68 1774.93 33641.81 00:34:32.466 [2024-11-19 11:02:11.515847] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:32.466 [2024-11-19 11:02:11.515892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d19000 (9): Bad file descriptor 00:34:32.466 [2024-11-19 11:02:11.517539] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:34:32.466 [2024-11-19 11:02:11.517632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:32.466 [2024-11-19 11:02:11.517661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.466 [2024-11-19 11:02:11.517679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:34:32.466 [2024-11-19 11:02:11.517689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:34:32.466 [2024-11-19 11:02:11.517697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.466 [2024-11-19 11:02:11.517705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d19000 00:34:32.466 [2024-11-19 11:02:11.517728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d19000 (9): Bad file descriptor 00:34:32.466 [2024-11-19 11:02:11.517742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:32.466 [2024-11-19 11:02:11.517750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:32.466 [2024-11-19 11:02:11.517760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:32.466 [2024-11-19 11:02:11.517771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:32.466 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.466 11:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:34:33.408 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1242963 00:34:33.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1242963) - No such process 00:34:33.408 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:34:33.408 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:34:33.409 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:33.409 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:34:33.409 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:33.409 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:33.409 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:33.409 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:33.409 { 00:34:33.409 "params": { 00:34:33.409 "name": "Nvme$subsystem", 00:34:33.409 "trtype": "$TEST_TRANSPORT", 00:34:33.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.409 "adrfam": "ipv4", 00:34:33.409 "trsvcid": "$NVMF_PORT", 00:34:33.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.409 "hdgst": ${hdgst:-false}, 00:34:33.409 "ddgst": ${ddgst:-false} 00:34:33.409 }, 00:34:33.409 "method": "bdev_nvme_attach_controller" 00:34:33.409 } 00:34:33.409 EOF 00:34:33.409 )") 00:34:33.409 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:33.409 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
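The heredoc assembled above is gen_nvmf_target_json building the retry run's bdevperf configuration on the fly: one bdev_nvme_attach_controller stanza per subsystem id, validated and pretty-printed by jq, then handed to bdevperf as the pseudo-file /dev/fd/62 (the descriptor a <(...) process substitution happens to receive). A condensed sketch of the same pattern using the values visible in the rendered output just below; the outer "subsystems"/"bdev" wrapper is an assumption based on SPDK's usual JSON config shape, since the log only shows the inner stanza:

    # Sketch: build a bdev config in-shell and feed it to bdevperf via /dev/fd/NN.
    gen_config() {
        jq . <<< '{"subsystems": [{"subsystem": "bdev", "config": [{
            "method": "bdev_nvme_attach_controller",
            "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                       "adrfam": "ipv4", "trsvcid": "4420",
                       "subnqn": "nqn.2016-06.io.spdk:cnode0",
                       "hostnqn": "nqn.2016-06.io.spdk:host0",
                       "hdgst": false, "ddgst": false}}]}]}'
    }
    # <(...) expands to a /dev/fd path, which is what the log shows as /dev/fd/62.
    bdevperf --json <(gen_config) -q 64 -o 65536 -w verify -t 1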
00:34:33.409 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:34:33.409 11:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:33.409 "params": {
00:34:33.409 "name": "Nvme0",
00:34:33.409 "trtype": "tcp",
00:34:33.409 "traddr": "10.0.0.2",
00:34:33.409 "adrfam": "ipv4",
00:34:33.409 "trsvcid": "4420",
00:34:33.409 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:33.409 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:34:33.409 "hdgst": false,
00:34:33.409 "ddgst": false
00:34:33.409 },
00:34:33.409 "method": "bdev_nvme_attach_controller"
00:34:33.409 }'
00:34:33.409 [2024-11-19 11:02:12.589686] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization...
00:34:33.409 [2024-11-19 11:02:12.589764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243319 ]
00:34:33.669 [2024-11-19 11:02:12.681375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:33.669 [2024-11-19 11:02:12.733016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:33.929 Running I/O for 1 seconds...
00:34:34.869 1895.00 IOPS, 118.44 MiB/s
00:34:34.869 Latency(us)
00:34:34.869 [2024-11-19T10:02:14.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:34.869 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:34.869 Verification LBA range: start 0x0 length 0x400
00:34:34.869 Nvme0n1 : 1.01 1946.00 121.63 0.00 0.00 32204.77 2362.03 31894.19
00:34:34.869 [2024-11-19T10:02:14.064Z] ===================================================================================================================
00:34:34.869 [2024-11-19T10:02:14.064Z] Total : 1946.00 121.63 0.00 0.00 32204.77 2362.03 31894.19
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management --
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:35.129 rmmod nvme_tcp 00:34:35.129 rmmod nvme_fabrics 00:34:35.129 rmmod nvme_keyring 00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:35.129 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1242676 ']' 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1242676 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1242676 ']' 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1242676 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1242676 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1242676' 00:34:35.130 killing process with pid 1242676 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1242676 00:34:35.130 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1242676 00:34:35.390 [2024-11-19 11:02:14.339146] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:34:35.390 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:35.390 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:35.390 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:35.390 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:34:35.390 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:34:35.390 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:35.390 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:34:35.390 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:35.390 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:34:35.391 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.391 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.391 11:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.305 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:37.305 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:34:37.305 00:34:37.305 real 0m14.707s 00:34:37.305 user 0m19.579s 00:34:37.305 sys 0m7.433s 00:34:37.305 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:37.305 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:37.305 ************************************ 00:34:37.305 END TEST nvmf_host_management 00:34:37.305 ************************************ 00:34:37.305 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:37.305 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:37.305 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.305 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:37.567 ************************************ 00:34:37.567 START TEST nvmf_lvol 00:34:37.567 ************************************ 00:34:37.567 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:37.567 * Looking for test storage... 
00:34:37.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:37.567 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:37.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.568 --rc genhtml_branch_coverage=1 00:34:37.568 --rc genhtml_function_coverage=1 00:34:37.568 --rc genhtml_legend=1 00:34:37.568 --rc geninfo_all_blocks=1 00:34:37.568 --rc geninfo_unexecuted_blocks=1 00:34:37.568 00:34:37.568 ' 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:37.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.568 --rc genhtml_branch_coverage=1 00:34:37.568 --rc genhtml_function_coverage=1 00:34:37.568 --rc genhtml_legend=1 00:34:37.568 --rc geninfo_all_blocks=1 00:34:37.568 --rc geninfo_unexecuted_blocks=1 00:34:37.568 00:34:37.568 ' 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:37.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.568 --rc genhtml_branch_coverage=1 00:34:37.568 --rc genhtml_function_coverage=1 00:34:37.568 --rc genhtml_legend=1 00:34:37.568 --rc geninfo_all_blocks=1 00:34:37.568 --rc geninfo_unexecuted_blocks=1 00:34:37.568 00:34:37.568 ' 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:37.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.568 --rc genhtml_branch_coverage=1 00:34:37.568 --rc genhtml_function_coverage=1 00:34:37.568 --rc genhtml_legend=1 00:34:37.568 --rc geninfo_all_blocks=1 00:34:37.568 --rc geninfo_unexecuted_blocks=1 00:34:37.568 00:34:37.568 ' 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.568 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:37.830 11:02:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:37.830 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:34:37.831 11:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:45.975 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:45.975 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:34:45.975 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:45.975 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:45.975 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:45.975 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:34:45.975 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:45.975 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:34:45.975 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:45.976 11:02:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:45.976 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:45.976 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:45.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:45.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:45.976 11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:45.976 
11:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:45.976 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:45.976 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:45.976 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:45.976 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:45.976 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:45.976 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:45.976 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:45.976 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:45.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:45.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:34:45.976 00:34:45.976 --- 10.0.0.2 ping statistics --- 00:34:45.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.976 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:34:45.976 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:45.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:45.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:34:45.976 00:34:45.976 --- 10.0.0.1 ping statistics --- 00:34:45.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.976 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1247820 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1247820 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1247820 ']' 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:45.977 [2024-11-19 11:02:24.304555] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
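The nvmf_tcp_init sequence traced above builds the two-port point-to-point topology every tcp phy test here relies on: the first E810 port (cvl_0_0) moves into a fresh network namespace and becomes the target side at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and both directions are ping-verified. Condensed from the trace, with the same device and namespace names:

    ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # every target binary from here on runs under: ip netns exec cvl_0_0_ns_spdk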
00:34:45.977 [2024-11-19 11:02:24.305680] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:34:45.977 [2024-11-19 11:02:24.305732] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.977 [2024-11-19 11:02:24.379874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:45.977 [2024-11-19 11:02:24.427039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.977 [2024-11-19 11:02:24.427090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:45.977 [2024-11-19 11:02:24.427097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.977 [2024-11-19 11:02:24.427102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.977 [2024-11-19 11:02:24.427107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:45.977 [2024-11-19 11:02:24.428733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.977 [2024-11-19 11:02:24.428895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.977 [2024-11-19 11:02:24.428897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:45.977 [2024-11-19 11:02:24.500863] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:45.977 [2024-11-19 11:02:24.501558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:45.977 [2024-11-19 11:02:24.502667] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:45.977 [2024-11-19 11:02:24.502713] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
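Because the suite passes --interrupt-mode, the target comes up event-driven rather than busy-polling: the three reactors match the -m 0x7 core mask, and each spdk_thread (the app thread plus one nvmf poll group per core) is switched to interrupt mode before the transport is created, which is what the thread.c notices above record. The launch line, reproduced from the trace:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7
    # -i 0      shared-memory id, matching the 'spdk_trace -s nvmf -i 0' hint above
    # -e 0xFFFF tracepoint group mask (see the app_setup_trace notices)
    # -m 0x7    cores 0-2: one reactor, and one nvmf poll group thread, per core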
00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:45.977 [2024-11-19 11:02:24.741733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:34:45.977 11:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:46.239 11:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:34:46.239 11:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:34:46.239 11:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:34:46.500 11:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f9d5b853-f7de-44eb-80e8-54b04def261e 00:34:46.500 11:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9d5b853-f7de-44eb-80e8-54b04def261e lvol 20 00:34:46.762 11:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=543c0ab1-36f1-4183-b7e0-c443120277a1 00:34:46.762 11:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:47.023 11:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 543c0ab1-36f1-4183-b7e0-c443120277a1 00:34:47.023 11:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:47.284 [2024-11-19 11:02:26.297635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 ***
00:34:47.284 11:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:47.546 11:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1248347
00:34:47.546 11:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:34:47.546 11:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:34:48.489 11:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 543c0ab1-36f1-4183-b7e0-c443120277a1 MY_SNAPSHOT
00:34:48.751 11:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f29ace9f-0d94-4220-8294-7758713c6474
00:34:48.751 11:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 543c0ab1-36f1-4183-b7e0-c443120277a1 30
00:34:49.014 11:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f29ace9f-0d94-4220-8294-7758713c6474 MY_CLONE
00:34:49.276 11:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=231fd033-718c-4b3f-9dc8-1f6def2c865d
00:34:49.276 11:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 231fd033-718c-4b3f-9dc8-1f6def2c865d
00:34:49.537 11:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1248347
00:34:57.675 Initializing NVMe Controllers
00:34:57.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:34:57.675 Controller IO queue size 128, less than required.
00:34:57.675 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:57.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:34:57.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:34:57.675 Initialization complete. Launching workers.
00:34:57.675 ========================================================
00:34:57.675 Latency(us)
00:34:57.675 Device Information : IOPS MiB/s Average min max
00:34:57.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14835.30 57.95 8630.69 1885.30 61362.98
00:34:57.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15146.40 59.17 8452.01 1096.38 115005.51
00:34:57.675 ========================================================
00:34:57.675 Total : 29981.70 117.12 8540.42 1096.38 115005.51
00:34:57.675
00:34:57.675 11:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:57.936 11:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 543c0ab1-36f1-4183-b7e0-c443120277a1
00:34:57.936 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9d5b853-f7de-44eb-80e8-54b04def261e
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:58.198 rmmod nvme_tcp
00:34:58.198 rmmod nvme_fabrics
00:34:58.198 rmmod nvme_keyring
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1247820 ']'
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1247820
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1247820 ']'
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1247820
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1247820 00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1247820' 00:34:58.198 killing process with pid 1247820 00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1247820 00:34:58.198 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1247820 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:58.459 11:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.373 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:00.643 00:35:00.643 real 0m23.034s 00:35:00.643 user 0m55.119s 00:35:00.643 sys 0m10.652s 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:00.643 ************************************ 00:35:00.643 END TEST nvmf_lvol 00:35:00.643 ************************************ 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:00.643 ************************************ 00:35:00.643 START TEST nvmf_lvs_grow 00:35:00.643 
************************************ 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:00.643 * Looking for test storage... 00:35:00.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:35:00.643 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:35:00.644 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.644 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:35:00.905 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.905 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.905 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.905 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:00.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.906 --rc genhtml_branch_coverage=1 00:35:00.906 --rc genhtml_function_coverage=1 00:35:00.906 --rc genhtml_legend=1 00:35:00.906 --rc geninfo_all_blocks=1 00:35:00.906 --rc geninfo_unexecuted_blocks=1 00:35:00.906 00:35:00.906 ' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:00.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.906 --rc genhtml_branch_coverage=1 00:35:00.906 --rc genhtml_function_coverage=1 00:35:00.906 --rc genhtml_legend=1 00:35:00.906 --rc geninfo_all_blocks=1 00:35:00.906 --rc geninfo_unexecuted_blocks=1 00:35:00.906 00:35:00.906 ' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:00.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.906 --rc genhtml_branch_coverage=1 00:35:00.906 --rc genhtml_function_coverage=1 00:35:00.906 --rc genhtml_legend=1 00:35:00.906 --rc geninfo_all_blocks=1 00:35:00.906 --rc geninfo_unexecuted_blocks=1 00:35:00.906 00:35:00.906 ' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:00.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.906 --rc genhtml_branch_coverage=1 00:35:00.906 --rc genhtml_function_coverage=1 00:35:00.906 --rc genhtml_legend=1 00:35:00.906 --rc geninfo_all_blocks=1 00:35:00.906 --rc geninfo_unexecuted_blocks=1 00:35:00.906 00:35:00.906 ' 00:35:00.906 11:02:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:35:00.906 11:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:09.053 11:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:09.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:09.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:09.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:09.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:09.053 11:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:09.053 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:09.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:09.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:35:09.054 00:35:09.054 --- 10.0.0.2 ping statistics --- 00:35:09.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.054 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:09.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:09.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:35:09.054 00:35:09.054 --- 10.0.0.1 ping statistics --- 00:35:09.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.054 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1254391 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1254391 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1254391 ']' 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:09.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:09.054 11:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:09.054 [2024-11-19 11:02:47.429734] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:35:09.054 [2024-11-19 11:02:47.430865] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:35:09.054 [2024-11-19 11:02:47.430917] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:09.054 [2024-11-19 11:02:47.532258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.054 [2024-11-19 11:02:47.583621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:09.054 [2024-11-19 11:02:47.583675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:09.054 [2024-11-19 11:02:47.583684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:09.054 [2024-11-19 11:02:47.583691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:09.054 [2024-11-19 11:02:47.583698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:09.054 [2024-11-19 11:02:47.584475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.054 [2024-11-19 11:02:47.661242] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:09.054 [2024-11-19 11:02:47.661532] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:09.054 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:09.054 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:35:09.054 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:09.315 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:09.315 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:09.315 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:09.316 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:09.316 [2024-11-19 11:02:48.445381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:09.316 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:35:09.316 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:09.316 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:09.316 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:09.316 ************************************ 00:35:09.316 START TEST lvs_grow_clean 00:35:09.316 ************************************ 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:09.577 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:09.838 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:09.838 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:09.838 11:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:10.100 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:10.100 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:10.100 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e lvol 150 00:35:10.100 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a3afb3e3-1db2-4d7c-a431-40c03f9305e4 00:35:10.100 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:10.361 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:10.361 [2024-11-19 11:02:49.461029] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:10.361 [2024-11-19 11:02:49.461263] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:10.361 true 00:35:10.361 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:10.361 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:10.623 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:10.623 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:10.885 11:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a3afb3e3-1db2-4d7c-a431-40c03f9305e4 00:35:10.885 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:11.161 [2024-11-19 11:02:50.201719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.161 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:11.541 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1255076 00:35:11.541 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:11.541 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:11.541 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1255076 /var/tmp/bdevperf.sock 00:35:11.541 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1255076 ']' 00:35:11.541 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:35:11.541 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.541 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:11.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:11.541 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.541 11:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:11.541 [2024-11-19 11:02:50.472515] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:35:11.541 [2024-11-19 11:02:50.472592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255076 ] 00:35:11.541 [2024-11-19 11:02:50.564304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.541 [2024-11-19 11:02:50.616422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.115 11:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.115 11:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:35:12.115 11:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:12.689 Nvme0n1 00:35:12.689 11:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:12.951 [ 00:35:12.951 { 00:35:12.951 "name": "Nvme0n1", 00:35:12.951 "aliases": [ 00:35:12.951 "a3afb3e3-1db2-4d7c-a431-40c03f9305e4" 00:35:12.951 ], 00:35:12.951 "product_name": "NVMe disk", 00:35:12.951 "block_size": 4096, 00:35:12.951 "num_blocks": 38912, 00:35:12.951 "uuid": "a3afb3e3-1db2-4d7c-a431-40c03f9305e4", 00:35:12.951 "numa_id": 0, 00:35:12.951 "assigned_rate_limits": { 00:35:12.951 "rw_ios_per_sec": 0, 00:35:12.951 "rw_mbytes_per_sec": 0, 00:35:12.951 "r_mbytes_per_sec": 0, 00:35:12.951 "w_mbytes_per_sec": 0 00:35:12.951 }, 00:35:12.951 "claimed": false, 00:35:12.951 "zoned": false, 00:35:12.951 "supported_io_types": { 00:35:12.951 "read": true, 00:35:12.951 "write": true, 00:35:12.951 "unmap": true, 00:35:12.951 "flush": true, 00:35:12.951 "reset": true, 00:35:12.951 "nvme_admin": true, 00:35:12.951 "nvme_io": true, 00:35:12.951 "nvme_io_md": false, 00:35:12.951 "write_zeroes": true, 00:35:12.951 "zcopy": false, 00:35:12.951 "get_zone_info": false, 00:35:12.951 "zone_management": false, 00:35:12.951 "zone_append": false, 00:35:12.951 "compare": true, 00:35:12.951 "compare_and_write": true, 00:35:12.951 "abort": true, 00:35:12.951 "seek_hole": false, 00:35:12.951 "seek_data": false, 00:35:12.951 "copy": true, 
00:35:12.951 "nvme_iov_md": false 00:35:12.951 }, 00:35:12.951 "memory_domains": [ 00:35:12.951 { 00:35:12.951 "dma_device_id": "system", 00:35:12.951 "dma_device_type": 1 00:35:12.951 } 00:35:12.951 ], 00:35:12.951 "driver_specific": { 00:35:12.951 "nvme": [ 00:35:12.951 { 00:35:12.951 "trid": { 00:35:12.951 "trtype": "TCP", 00:35:12.951 "adrfam": "IPv4", 00:35:12.951 "traddr": "10.0.0.2", 00:35:12.951 "trsvcid": "4420", 00:35:12.951 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:12.951 }, 00:35:12.951 "ctrlr_data": { 00:35:12.951 "cntlid": 1, 00:35:12.951 "vendor_id": "0x8086", 00:35:12.951 "model_number": "SPDK bdev Controller", 00:35:12.951 "serial_number": "SPDK0", 00:35:12.951 "firmware_revision": "25.01", 00:35:12.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:12.951 "oacs": { 00:35:12.951 "security": 0, 00:35:12.951 "format": 0, 00:35:12.951 "firmware": 0, 00:35:12.951 "ns_manage": 0 00:35:12.951 }, 00:35:12.951 "multi_ctrlr": true, 00:35:12.951 "ana_reporting": false 00:35:12.951 }, 00:35:12.951 "vs": { 00:35:12.951 "nvme_version": "1.3" 00:35:12.951 }, 00:35:12.951 "ns_data": { 00:35:12.951 "id": 1, 00:35:12.951 "can_share": true 00:35:12.951 } 00:35:12.951 } 00:35:12.951 ], 00:35:12.951 "mp_policy": "active_passive" 00:35:12.951 } 00:35:12.951 } 00:35:12.951 ] 00:35:12.951 11:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1255420 00:35:12.951 11:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:12.951 11:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:12.951 Running I/O for 10 seconds... 
00:35:13.895 Latency(us) 00:35:13.895 [2024-11-19T10:02:53.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:13.895 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:35:13.895 [2024-11-19T10:02:53.090Z] =================================================================================================================== 00:35:13.895 [2024-11-19T10:02:53.090Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:35:13.895 00:35:14.838 11:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:14.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:14.838 Nvme0n1 : 2.00 17018.00 66.48 0.00 0.00 0.00 0.00 0.00 00:35:14.838 [2024-11-19T10:02:54.033Z] =================================================================================================================== 00:35:14.838 [2024-11-19T10:02:54.033Z] Total : 17018.00 66.48 0.00 0.00 0.00 0.00 0.00 00:35:14.838 00:35:15.099 true 00:35:15.099 11:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:15.099 11:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:15.360 11:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:15.360 11:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:15.360 11:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1255420 00:35:15.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:15.932 Nvme0n1 : 3.00 17314.33 67.63 0.00 0.00 0.00 0.00 0.00 00:35:15.932 [2024-11-19T10:02:55.127Z] =================================================================================================================== 00:35:15.932 [2024-11-19T10:02:55.127Z] Total : 17314.33 67.63 0.00 0.00 0.00 0.00 0.00 00:35:15.932 00:35:16.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:16.874 Nvme0n1 : 4.00 17970.50 70.20 0.00 0.00 0.00 0.00 0.00 00:35:16.874 [2024-11-19T10:02:56.069Z] =================================================================================================================== 00:35:16.874 [2024-11-19T10:02:56.069Z] Total : 17970.50 70.20 0.00 0.00 0.00 0.00 0.00 00:35:16.874 00:35:17.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:17.816 Nvme0n1 : 5.00 19456.40 76.00 0.00 0.00 0.00 0.00 0.00 00:35:17.816 [2024-11-19T10:02:57.011Z] =================================================================================================================== 00:35:17.816 [2024-11-19T10:02:57.011Z] Total : 19456.40 76.00 0.00 0.00 0.00 0.00 0.00 00:35:17.816 00:35:19.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:19.241 Nvme0n1 : 6.00 20447.00 79.87 0.00 0.00 0.00 0.00 0.00 00:35:19.241 [2024-11-19T10:02:58.436Z] 
=================================================================================================================== 00:35:19.241 [2024-11-19T10:02:58.436Z] Total : 20447.00 79.87 0.00 0.00 0.00 0.00 0.00 00:35:19.241 00:35:20.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:20.184 Nvme0n1 : 7.00 21154.57 82.64 0.00 0.00 0.00 0.00 0.00 00:35:20.184 [2024-11-19T10:02:59.379Z] =================================================================================================================== 00:35:20.184 [2024-11-19T10:02:59.379Z] Total : 21154.57 82.64 0.00 0.00 0.00 0.00 0.00 00:35:20.184 00:35:21.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:21.123 Nvme0n1 : 8.00 21701.12 84.77 0.00 0.00 0.00 0.00 0.00 00:35:21.123 [2024-11-19T10:03:00.318Z] =================================================================================================================== 00:35:21.123 [2024-11-19T10:03:00.318Z] Total : 21701.12 84.77 0.00 0.00 0.00 0.00 0.00 00:35:21.123 00:35:22.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:22.065 Nvme0n1 : 9.00 22112.11 86.38 0.00 0.00 0.00 0.00 0.00 00:35:22.065 [2024-11-19T10:03:01.260Z] =================================================================================================================== 00:35:22.065 [2024-11-19T10:03:01.260Z] Total : 22112.11 86.38 0.00 0.00 0.00 0.00 0.00 00:35:22.065 00:35:23.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:23.007 Nvme0n1 : 10.00 22440.90 87.66 0.00 0.00 0.00 0.00 0.00 00:35:23.007 [2024-11-19T10:03:02.202Z] =================================================================================================================== 00:35:23.007 [2024-11-19T10:03:02.202Z] Total : 22440.90 87.66 0.00 0.00 0.00 0.00 0.00 00:35:23.007 00:35:23.007 00:35:23.007 Latency(us) 00:35:23.007 [2024-11-19T10:03:02.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:23.007 Nvme0n1 : 10.01 22442.19 87.66 0.00 0.00 5700.31 4423.68 32112.64 00:35:23.007 [2024-11-19T10:03:02.202Z] =================================================================================================================== 00:35:23.007 [2024-11-19T10:03:02.202Z] Total : 22442.19 87.66 0.00 0.00 5700.31 4423.68 32112.64 00:35:23.007 { 00:35:23.007 "results": [ 00:35:23.007 { 00:35:23.007 "job": "Nvme0n1", 00:35:23.007 "core_mask": "0x2", 00:35:23.007 "workload": "randwrite", 00:35:23.007 "status": "finished", 00:35:23.007 "queue_depth": 128, 00:35:23.007 "io_size": 4096, 00:35:23.007 "runtime": 10.00513, 00:35:23.007 "iops": 22442.18715798795, 00:35:23.007 "mibps": 87.66479358589044, 00:35:23.007 "io_failed": 0, 00:35:23.007 "io_timeout": 0, 00:35:23.007 "avg_latency_us": 5700.308167176605, 00:35:23.007 "min_latency_us": 4423.68, 00:35:23.007 "max_latency_us": 32112.64 00:35:23.007 } 00:35:23.007 ], 00:35:23.007 "core_count": 1 00:35:23.007 } 00:35:23.007 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1255076 00:35:23.007 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1255076 ']' 00:35:23.007 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1255076 00:35:23.007 11:03:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:35:23.007 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.007 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1255076 00:35:23.007 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:23.007 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:23.007 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1255076' 00:35:23.007 killing process with pid 1255076 00:35:23.007 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1255076 00:35:23.007 Received shutdown signal, test time was about 10.000000 seconds 00:35:23.007 00:35:23.007 Latency(us) 00:35:23.007 [2024-11-19T10:03:02.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.007 [2024-11-19T10:03:02.202Z] =================================================================================================================== 00:35:23.007 [2024-11-19T10:03:02.202Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:23.007 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1255076 00:35:23.268 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:23.268 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:23.529 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:23.529 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:23.791 [2024-11-19 11:03:02.925100] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:23.791 11:03:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:23.791 11:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:24.051 request: 00:35:24.051 { 00:35:24.051 "uuid": "2e24d726-daf8-46cb-9695-d84d4fb4af8e", 00:35:24.051 "method": "bdev_lvol_get_lvstores", 00:35:24.051 "req_id": 1 00:35:24.051 } 00:35:24.051 Got JSON-RPC error response 00:35:24.051 response: 00:35:24.051 { 00:35:24.051 "code": -19, 00:35:24.051 "message": "No such device" 00:35:24.051 } 00:35:24.051 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:35:24.051 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:24.051 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:24.051 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:24.051 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:24.312 aio_bdev 00:35:24.312 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a3afb3e3-1db2-4d7c-a431-40c03f9305e4 
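The valid_exec_arg/NOT indirection traced above is the suite's way of asserting that an RPC call fails: with aio_bdev deleted, bdev_lvol_get_lvstores must come back with the -19 "No such device" JSON-RPC error shown in the response block. A minimal sketch of the same negative check, with the long workspace path collapsed into an illustrative $rpc variable:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if "$rpc" bdev_lvol_get_lvstores -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e; then
        echo "lookup unexpectedly succeeded after base bdev removal" >&2
        exit 1
    fi
    # expected failure: JSON-RPC error code -19, message "No such device"

The aio_bdev is then recreated from the same backing file and waitforbdev polls until the lvol reappears, proving the lvstore metadata survived the hot-remove.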
00:35:24.312 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a3afb3e3-1db2-4d7c-a431-40c03f9305e4 00:35:24.312 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:24.312 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:35:24.312 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:24.312 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:24.312 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:24.573 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a3afb3e3-1db2-4d7c-a431-40c03f9305e4 -t 2000 00:35:24.573 [ 00:35:24.573 { 00:35:24.573 "name": "a3afb3e3-1db2-4d7c-a431-40c03f9305e4", 00:35:24.573 "aliases": [ 00:35:24.573 "lvs/lvol" 00:35:24.573 ], 00:35:24.573 "product_name": "Logical Volume", 00:35:24.573 "block_size": 4096, 00:35:24.573 "num_blocks": 38912, 00:35:24.573 "uuid": "a3afb3e3-1db2-4d7c-a431-40c03f9305e4", 00:35:24.573 "assigned_rate_limits": { 00:35:24.573 "rw_ios_per_sec": 0, 00:35:24.573 "rw_mbytes_per_sec": 0, 00:35:24.573 "r_mbytes_per_sec": 0, 00:35:24.573 "w_mbytes_per_sec": 0 00:35:24.573 }, 00:35:24.573 "claimed": false, 00:35:24.573 "zoned": false, 00:35:24.573 "supported_io_types": { 00:35:24.573 "read": true, 00:35:24.573 "write": true, 00:35:24.573 "unmap": true, 00:35:24.573 "flush": false, 00:35:24.573 "reset": true, 00:35:24.573 "nvme_admin": false, 00:35:24.573 "nvme_io": false, 00:35:24.573 "nvme_io_md": false, 00:35:24.573 "write_zeroes": true, 00:35:24.573 "zcopy": false, 00:35:24.573 "get_zone_info": false, 00:35:24.573 "zone_management": false, 00:35:24.573 "zone_append": false, 00:35:24.573 "compare": false, 00:35:24.573 "compare_and_write": false, 00:35:24.573 "abort": false, 00:35:24.573 "seek_hole": true, 00:35:24.573 "seek_data": true, 00:35:24.573 "copy": false, 00:35:24.573 "nvme_iov_md": false 00:35:24.573 }, 00:35:24.573 "driver_specific": { 00:35:24.573 "lvol": { 00:35:24.573 "lvol_store_uuid": "2e24d726-daf8-46cb-9695-d84d4fb4af8e", 00:35:24.573 "base_bdev": "aio_bdev", 00:35:24.573 "thin_provision": false, 00:35:24.573 "num_allocated_clusters": 38, 00:35:24.573 "snapshot": false, 00:35:24.573 "clone": false, 00:35:24.573 "esnap_clone": false 00:35:24.573 } 00:35:24.573 } 00:35:24.573 } 00:35:24.573 ] 00:35:24.573 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:35:24.573 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:24.573 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:24.834 11:03:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:24.834 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:24.834 11:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:25.096 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:25.096 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a3afb3e3-1db2-4d7c-a431-40c03f9305e4 00:35:25.096 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2e24d726-daf8-46cb-9695-d84d4fb4af8e 00:35:25.356 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:25.618 00:35:25.618 real 0m16.169s 00:35:25.618 user 0m15.781s 00:35:25.618 sys 0m1.543s 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:25.618 ************************************ 00:35:25.618 END TEST lvs_grow_clean 00:35:25.618 ************************************ 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:25.618 ************************************ 00:35:25.618 START TEST lvs_grow_dirty 00:35:25.618 ************************************ 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:25.618 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:25.619 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:25.619 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:25.619 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:25.619 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:25.619 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:25.619 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:25.881 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:25.881 11:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:26.142 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:26.142 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:26.142 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:26.404 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:26.404 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:26.404 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 lvol 150 00:35:26.404 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=15d503ad-0715-4fc4-aab7-39d8d4835240 00:35:26.404 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:26.404 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:26.665 [2024-11-19 11:03:05.721023] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:26.665 [2024-11-19 11:03:05.721234] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:26.665 true 00:35:26.665 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:26.665 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:26.925 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:26.925 11:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:26.925 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 15d503ad-0715-4fc4-aab7-39d8d4835240 00:35:27.186 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:27.446 [2024-11-19 11:03:06.449622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.446 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:27.707 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1258261 00:35:27.707 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:27.707 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:27.707 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1258261 /var/tmp/bdevperf.sock 00:35:27.707 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1258261 ']' 00:35:27.707 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:27.707 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.707 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:27.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
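Condensed, the dirty-grow setup traced above is: back an lvstore with a 200 MiB AIO file (49 usable 4 MiB clusters), carve a 150 MiB lvol out of it, then grow the backing file to 400 MiB and rescan, which doubles the bdev block count (51200 -> 102400) while the lvstore still reports 49 clusters until bdev_lvol_grow_lvstore runs later. A sketch with the workspace paths shortened to an illustrative $SPDK_DIR and rpc.py assumed on PATH:

    aio=$SPDK_DIR/test/nvmf/target/aio_bdev           # backing file, as in the test
    truncate -s 200M "$aio"
    rpc.py bdev_aio_create "$aio" aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
    rpc.py bdev_lvol_create -u "$lvs" lvol 150        # 150 MiB logical volume
    truncate -s 400M "$aio"                           # grow the file under the bdev
    rpc.py bdev_aio_rescan aio_bdev                   # block count 51200 -> 102400
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # still 49

The lvol is then exported over NVMe/TCP (nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) and bdevperf attaches to it as Nvme0n1 for the 10-second randwrite run that follows.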
00:35:27.707 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.707 11:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:27.707 [2024-11-19 11:03:06.704493] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:35:27.707 [2024-11-19 11:03:06.704563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258261 ] 00:35:27.707 [2024-11-19 11:03:06.791203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.707 [2024-11-19 11:03:06.825619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:28.650 11:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:28.650 11:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:28.650 11:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:28.650 Nvme0n1 00:35:28.650 11:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:28.911 [ 00:35:28.911 { 00:35:28.911 "name": "Nvme0n1", 00:35:28.911 "aliases": [ 00:35:28.911 "15d503ad-0715-4fc4-aab7-39d8d4835240" 00:35:28.911 ], 00:35:28.911 "product_name": "NVMe disk", 00:35:28.911 "block_size": 4096, 00:35:28.911 "num_blocks": 38912, 00:35:28.911 "uuid": "15d503ad-0715-4fc4-aab7-39d8d4835240", 00:35:28.911 "numa_id": 0, 00:35:28.911 "assigned_rate_limits": { 00:35:28.911 "rw_ios_per_sec": 0, 00:35:28.911 "rw_mbytes_per_sec": 0, 00:35:28.911 "r_mbytes_per_sec": 0, 00:35:28.911 "w_mbytes_per_sec": 0 00:35:28.911 }, 00:35:28.911 "claimed": false, 00:35:28.911 "zoned": false, 00:35:28.911 "supported_io_types": { 00:35:28.911 "read": true, 00:35:28.911 "write": true, 00:35:28.911 "unmap": true, 00:35:28.911 "flush": true, 00:35:28.911 "reset": true, 00:35:28.911 "nvme_admin": true, 00:35:28.911 "nvme_io": true, 00:35:28.911 "nvme_io_md": false, 00:35:28.911 "write_zeroes": true, 00:35:28.911 "zcopy": false, 00:35:28.911 "get_zone_info": false, 00:35:28.911 "zone_management": false, 00:35:28.911 "zone_append": false, 00:35:28.911 "compare": true, 00:35:28.911 "compare_and_write": true, 00:35:28.911 "abort": true, 00:35:28.911 "seek_hole": false, 00:35:28.911 "seek_data": false, 00:35:28.911 "copy": true, 00:35:28.911 "nvme_iov_md": false 00:35:28.911 }, 00:35:28.911 "memory_domains": [ 00:35:28.911 { 00:35:28.911 "dma_device_id": "system", 00:35:28.911 "dma_device_type": 1 00:35:28.911 } 00:35:28.911 ], 00:35:28.911 "driver_specific": { 00:35:28.911 "nvme": [ 00:35:28.911 { 00:35:28.911 "trid": { 00:35:28.911 "trtype": "TCP", 00:35:28.911 "adrfam": "IPv4", 00:35:28.911 "traddr": "10.0.0.2", 00:35:28.911 "trsvcid": "4420", 00:35:28.911 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:28.911 }, 00:35:28.911 "ctrlr_data": 
{ 00:35:28.911 "cntlid": 1, 00:35:28.911 "vendor_id": "0x8086", 00:35:28.911 "model_number": "SPDK bdev Controller", 00:35:28.911 "serial_number": "SPDK0", 00:35:28.911 "firmware_revision": "25.01", 00:35:28.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:28.911 "oacs": { 00:35:28.911 "security": 0, 00:35:28.911 "format": 0, 00:35:28.911 "firmware": 0, 00:35:28.911 "ns_manage": 0 00:35:28.911 }, 00:35:28.911 "multi_ctrlr": true, 00:35:28.911 "ana_reporting": false 00:35:28.911 }, 00:35:28.911 "vs": { 00:35:28.911 "nvme_version": "1.3" 00:35:28.911 }, 00:35:28.911 "ns_data": { 00:35:28.911 "id": 1, 00:35:28.912 "can_share": true 00:35:28.912 } 00:35:28.912 } 00:35:28.912 ], 00:35:28.912 "mp_policy": "active_passive" 00:35:28.912 } 00:35:28.912 } 00:35:28.912 ] 00:35:28.912 11:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1258541 00:35:28.912 11:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:28.912 11:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:28.912 Running I/O for 10 seconds... 00:35:29.854 Latency(us) 00:35:29.854 [2024-11-19T10:03:09.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:29.854 Nvme0n1 : 1.00 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:35:29.854 [2024-11-19T10:03:09.049Z] =================================================================================================================== 00:35:29.854 [2024-11-19T10:03:09.049Z] Total : 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:35:29.854 00:35:30.796 11:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:30.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:30.796 Nvme0n1 : 2.00 17785.00 69.47 0.00 0.00 0.00 0.00 0.00 00:35:30.796 [2024-11-19T10:03:09.991Z] =================================================================================================================== 00:35:30.796 [2024-11-19T10:03:09.991Z] Total : 17785.00 69.47 0.00 0.00 0.00 0.00 0.00 00:35:30.796 00:35:31.056 true 00:35:31.056 11:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:31.056 11:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:31.056 11:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:31.056 11:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:31.056 11:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1258541 00:35:31.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:31.997 Nvme0n1 : 
3.00 17825.67 69.63 0.00 0.00 0.00 0.00 0.00 00:35:31.997 [2024-11-19T10:03:11.192Z] =================================================================================================================== 00:35:31.997 [2024-11-19T10:03:11.192Z] Total : 17825.67 69.63 0.00 0.00 0.00 0.00 0.00 00:35:31.997 00:35:32.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:32.939 Nvme0n1 : 4.00 17909.50 69.96 0.00 0.00 0.00 0.00 0.00 00:35:32.939 [2024-11-19T10:03:12.134Z] =================================================================================================================== 00:35:32.939 [2024-11-19T10:03:12.134Z] Total : 17909.50 69.96 0.00 0.00 0.00 0.00 0.00 00:35:32.939 00:35:33.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:33.879 Nvme0n1 : 5.00 18772.60 73.33 0.00 0.00 0.00 0.00 0.00 00:35:33.879 [2024-11-19T10:03:13.074Z] =================================================================================================================== 00:35:33.879 [2024-11-19T10:03:13.074Z] Total : 18772.60 73.33 0.00 0.00 0.00 0.00 0.00 00:35:33.879 00:35:34.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:34.819 Nvme0n1 : 6.00 19909.17 77.77 0.00 0.00 0.00 0.00 0.00 00:35:34.819 [2024-11-19T10:03:14.014Z] =================================================================================================================== 00:35:34.819 [2024-11-19T10:03:14.014Z] Total : 19909.17 77.77 0.00 0.00 0.00 0.00 0.00 00:35:34.819 00:35:36.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:36.205 Nvme0n1 : 7.00 20711.71 80.91 0.00 0.00 0.00 0.00 0.00 00:35:36.205 [2024-11-19T10:03:15.400Z] =================================================================================================================== 00:35:36.205 [2024-11-19T10:03:15.400Z] Total : 20711.71 80.91 0.00 0.00 0.00 0.00 0.00 00:35:36.205 00:35:36.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:36.779 Nvme0n1 : 8.00 21313.62 83.26 0.00 0.00 0.00 0.00 0.00 00:35:36.779 [2024-11-19T10:03:15.974Z] =================================================================================================================== 00:35:36.779 [2024-11-19T10:03:15.974Z] Total : 21313.62 83.26 0.00 0.00 0.00 0.00 0.00 00:35:36.779 00:35:38.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:38.165 Nvme0n1 : 9.00 21783.67 85.09 0.00 0.00 0.00 0.00 0.00 00:35:38.165 [2024-11-19T10:03:17.360Z] =================================================================================================================== 00:35:38.165 [2024-11-19T10:03:17.360Z] Total : 21783.67 85.09 0.00 0.00 0.00 0.00 0.00 00:35:38.165 00:35:39.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:39.109 Nvme0n1 : 10.00 22164.40 86.58 0.00 0.00 0.00 0.00 0.00 00:35:39.109 [2024-11-19T10:03:18.304Z] =================================================================================================================== 00:35:39.109 [2024-11-19T10:03:18.304Z] Total : 22164.40 86.58 0.00 0.00 0.00 0.00 0.00 00:35:39.109 00:35:39.109 00:35:39.109 Latency(us) 00:35:39.109 [2024-11-19T10:03:18.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:39.109 Nvme0n1 : 10.00 22169.43 86.60 0.00 0.00 5770.97 2553.17 28835.84 00:35:39.109 
[2024-11-19T10:03:18.304Z] =================================================================================================================== 00:35:39.109 [2024-11-19T10:03:18.304Z] Total : 22169.43 86.60 0.00 0.00 5770.97 2553.17 28835.84 00:35:39.109 { 00:35:39.109 "results": [ 00:35:39.109 { 00:35:39.109 "job": "Nvme0n1", 00:35:39.109 "core_mask": "0x2", 00:35:39.109 "workload": "randwrite", 00:35:39.109 "status": "finished", 00:35:39.109 "queue_depth": 128, 00:35:39.109 "io_size": 4096, 00:35:39.109 "runtime": 10.003507, 00:35:39.109 "iops": 22169.425182588468, 00:35:39.109 "mibps": 86.5993171194862, 00:35:39.109 "io_failed": 0, 00:35:39.109 "io_timeout": 0, 00:35:39.109 "avg_latency_us": 5770.971142735181, 00:35:39.109 "min_latency_us": 2553.173333333333, 00:35:39.109 "max_latency_us": 28835.84 00:35:39.109 } 00:35:39.109 ], 00:35:39.109 "core_count": 1 00:35:39.109 } 00:35:39.109 11:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1258261 00:35:39.109 11:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1258261 ']' 00:35:39.109 11:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1258261 00:35:39.109 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:35:39.109 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:39.109 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1258261 00:35:39.109 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:39.109 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:39.109 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1258261' 00:35:39.109 killing process with pid 1258261 00:35:39.109 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1258261 00:35:39.109 Received shutdown signal, test time was about 10.000000 seconds 00:35:39.109 00:35:39.109 Latency(us) 00:35:39.109 [2024-11-19T10:03:18.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.109 [2024-11-19T10:03:18.304Z] =================================================================================================================== 00:35:39.109 [2024-11-19T10:03:18.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:39.109 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1258261 00:35:39.109 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:39.372 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:35:39.372 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:39.372 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1254391 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1254391 00:35:39.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1254391 Killed "${NVMF_APP[@]}" "$@" 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1261071 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1261071 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1261071 ']' 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
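The cluster accounting asserted here: after bdev_lvol_grow_lvstore the store spans 99 data clusters (400 MiB at 4 MiB per cluster, less metadata), and the 150 MiB lvol occupies ceil(150/4) = 38 of them, so 99 - 38 = 61 must be free. The dirty branch then SIGKILLs the target instead of shutting it down cleanly, leaving the blobstore marked in-use on disk. Roughly, with $lvs and $nvmfpid as illustrative stand-ins for the UUID and pid above:

    free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( free == 99 - 38 ))        # 61 free clusters expected
    kill -9 "$nvmfpid"           # dirty shutdown: no clean blobstore unload

The next target start has to recover that state, which is exactly what the blobstore notices further down exercise.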
00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.633 11:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:39.633 [2024-11-19 11:03:18.820659] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:39.633 [2024-11-19 11:03:18.821663] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:35:39.633 [2024-11-19 11:03:18.821708] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.894 [2024-11-19 11:03:18.913391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.894 [2024-11-19 11:03:18.944972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.894 [2024-11-19 11:03:18.945002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.894 [2024-11-19 11:03:18.945007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.894 [2024-11-19 11:03:18.945012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.894 [2024-11-19 11:03:18.945016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:39.894 [2024-11-19 11:03:18.945483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.894 [2024-11-19 11:03:18.996438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:39.894 [2024-11-19 11:03:18.996628] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
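This relaunch is what gives the suite its interrupt-mode coverage: with --interrupt-mode, reactors and SPDK threads block on file descriptors instead of busy-polling, as the thread.c notices above confirm. Stripped of the ip netns and workspace prefixes, the restart amounts to (a sketch, not the literal command line):

    nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # -m 0x1: single core; -e 0xFFFF: all tracepoint groups; -i 0: shm id 0

When the recreated aio_bdev is examined, the lvstore that was killed dirty triggers bs_recover ("Performing recovery on blobstore"), replaying metadata for blobs 0x0 and 0x1 before the lvol reappears, as the notices just below show.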
00:35:40.478 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:40.478 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:40.478 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:40.478 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:40.478 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:40.478 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:40.479 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:40.740 [2024-11-19 11:03:19.827900] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:35:40.740 [2024-11-19 11:03:19.828152] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:35:40.740 [2024-11-19 11:03:19.828267] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:35:40.740 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:35:40.740 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 15d503ad-0715-4fc4-aab7-39d8d4835240 00:35:40.740 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=15d503ad-0715-4fc4-aab7-39d8d4835240 00:35:40.740 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:40.740 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:40.740 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:40.740 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:40.740 11:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:41.001 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 15d503ad-0715-4fc4-aab7-39d8d4835240 -t 2000 00:35:41.262 [ 00:35:41.262 { 00:35:41.262 "name": "15d503ad-0715-4fc4-aab7-39d8d4835240", 00:35:41.262 "aliases": [ 00:35:41.262 "lvs/lvol" 00:35:41.262 ], 00:35:41.262 "product_name": "Logical Volume", 00:35:41.262 "block_size": 4096, 00:35:41.262 "num_blocks": 38912, 00:35:41.262 "uuid": "15d503ad-0715-4fc4-aab7-39d8d4835240", 00:35:41.262 "assigned_rate_limits": { 00:35:41.262 "rw_ios_per_sec": 0, 00:35:41.262 "rw_mbytes_per_sec": 0, 00:35:41.262 
"r_mbytes_per_sec": 0, 00:35:41.262 "w_mbytes_per_sec": 0 00:35:41.262 }, 00:35:41.262 "claimed": false, 00:35:41.262 "zoned": false, 00:35:41.262 "supported_io_types": { 00:35:41.262 "read": true, 00:35:41.262 "write": true, 00:35:41.262 "unmap": true, 00:35:41.262 "flush": false, 00:35:41.262 "reset": true, 00:35:41.262 "nvme_admin": false, 00:35:41.262 "nvme_io": false, 00:35:41.262 "nvme_io_md": false, 00:35:41.262 "write_zeroes": true, 00:35:41.262 "zcopy": false, 00:35:41.262 "get_zone_info": false, 00:35:41.262 "zone_management": false, 00:35:41.262 "zone_append": false, 00:35:41.262 "compare": false, 00:35:41.262 "compare_and_write": false, 00:35:41.262 "abort": false, 00:35:41.262 "seek_hole": true, 00:35:41.262 "seek_data": true, 00:35:41.262 "copy": false, 00:35:41.262 "nvme_iov_md": false 00:35:41.262 }, 00:35:41.262 "driver_specific": { 00:35:41.262 "lvol": { 00:35:41.263 "lvol_store_uuid": "7d4d0227-099c-4472-ba0f-de0e98a4f8a7", 00:35:41.263 "base_bdev": "aio_bdev", 00:35:41.263 "thin_provision": false, 00:35:41.263 "num_allocated_clusters": 38, 00:35:41.263 "snapshot": false, 00:35:41.263 "clone": false, 00:35:41.263 "esnap_clone": false 00:35:41.263 } 00:35:41.263 } 00:35:41.263 } 00:35:41.263 ] 00:35:41.263 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:41.263 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:41.263 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:35:41.263 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:35:41.263 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:41.263 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:35:41.524 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:35:41.524 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:41.785 [2024-11-19 11:03:20.729954] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:41.785 11:03:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:41.785 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:41.785 request: 00:35:41.785 { 00:35:41.785 "uuid": "7d4d0227-099c-4472-ba0f-de0e98a4f8a7", 00:35:41.785 "method": "bdev_lvol_get_lvstores", 00:35:41.785 "req_id": 1 00:35:41.785 } 00:35:41.785 Got JSON-RPC error response 00:35:41.785 response: 00:35:41.786 { 00:35:41.786 "code": -19, 00:35:41.786 "message": "No such device" 00:35:41.786 } 00:35:41.786 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:35:41.786 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:41.786 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:41.786 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:41.786 11:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:42.046 aio_bdev 00:35:42.046 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 15d503ad-0715-4fc4-aab7-39d8d4835240 00:35:42.046 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=15d503ad-0715-4fc4-aab7-39d8d4835240 00:35:42.046 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:42.046 11:03:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:42.046 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:42.046 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:42.046 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:42.307 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 15d503ad-0715-4fc4-aab7-39d8d4835240 -t 2000 00:35:42.307 [ 00:35:42.307 { 00:35:42.307 "name": "15d503ad-0715-4fc4-aab7-39d8d4835240", 00:35:42.307 "aliases": [ 00:35:42.307 "lvs/lvol" 00:35:42.307 ], 00:35:42.307 "product_name": "Logical Volume", 00:35:42.307 "block_size": 4096, 00:35:42.307 "num_blocks": 38912, 00:35:42.307 "uuid": "15d503ad-0715-4fc4-aab7-39d8d4835240", 00:35:42.307 "assigned_rate_limits": { 00:35:42.307 "rw_ios_per_sec": 0, 00:35:42.307 "rw_mbytes_per_sec": 0, 00:35:42.307 "r_mbytes_per_sec": 0, 00:35:42.307 "w_mbytes_per_sec": 0 00:35:42.307 }, 00:35:42.307 "claimed": false, 00:35:42.307 "zoned": false, 00:35:42.307 "supported_io_types": { 00:35:42.307 "read": true, 00:35:42.307 "write": true, 00:35:42.307 "unmap": true, 00:35:42.307 "flush": false, 00:35:42.307 "reset": true, 00:35:42.307 "nvme_admin": false, 00:35:42.307 "nvme_io": false, 00:35:42.307 "nvme_io_md": false, 00:35:42.307 "write_zeroes": true, 00:35:42.307 "zcopy": false, 00:35:42.307 "get_zone_info": false, 00:35:42.307 "zone_management": false, 00:35:42.307 "zone_append": false, 00:35:42.307 "compare": false, 00:35:42.307 "compare_and_write": false, 00:35:42.307 "abort": false, 00:35:42.307 "seek_hole": true, 00:35:42.307 "seek_data": true, 00:35:42.307 "copy": false, 00:35:42.307 "nvme_iov_md": false 00:35:42.307 }, 00:35:42.307 "driver_specific": { 00:35:42.307 "lvol": { 00:35:42.307 "lvol_store_uuid": "7d4d0227-099c-4472-ba0f-de0e98a4f8a7", 00:35:42.307 "base_bdev": "aio_bdev", 00:35:42.307 "thin_provision": false, 00:35:42.307 "num_allocated_clusters": 38, 00:35:42.307 "snapshot": false, 00:35:42.307 "clone": false, 00:35:42.307 "esnap_clone": false 00:35:42.307 } 00:35:42.307 } 00:35:42.307 } 00:35:42.307 ] 00:35:42.308 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:42.308 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:42.308 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:42.569 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:42.569 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:42.569 11:03:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:42.830 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:42.830 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 15d503ad-0715-4fc4-aab7-39d8d4835240 00:35:42.830 11:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d4d0227-099c-4472-ba0f-de0e98a4f8a7 00:35:43.091 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:43.352 00:35:43.352 real 0m17.602s 00:35:43.352 user 0m35.491s 00:35:43.352 sys 0m3.093s 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:43.352 ************************************ 00:35:43.352 END TEST lvs_grow_dirty 00:35:43.352 ************************************ 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:43.352 nvmf_trace.0 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
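Teardown follows the standard nvmf/common.sh pattern: archive the trace shm, sync, unload the initiator-side NVMe modules (with retries), then kill the target. In outline, under the same helper names used above and with the retry/break detail approximated:

    tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    sync
    for i in {1..20}; do modprobe -v -r nvme-tcp && break; done
    modprobe -v -r nvme-fabrics    # nvme_keyring unloads with it, per the rmmod lines below
    kill "$nvmfpid"                # killprocess $nvmfpid

$output_dir and $nvmfpid are illustrative stand-ins for the workspace output path and the pid (1261071) tracked by the harness.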
00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:43.352 rmmod nvme_tcp 00:35:43.352 rmmod nvme_fabrics 00:35:43.352 rmmod nvme_keyring 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1261071 ']' 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1261071 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1261071 ']' 00:35:43.352 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1261071 00:35:43.353 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:35:43.353 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:43.353 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1261071 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1261071' 00:35:43.614 killing process with pid 1261071 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1261071 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1261071 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.614 11:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.163 11:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:46.163 00:35:46.163 real 0m45.168s 00:35:46.163 user 0m54.322s 00:35:46.163 sys 0m10.724s 00:35:46.163 11:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:46.163 11:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:46.163 ************************************ 00:35:46.163 END TEST nvmf_lvs_grow 00:35:46.163 ************************************ 00:35:46.163 11:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:46.163 11:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:46.163 11:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:46.163 11:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:46.163 ************************************ 00:35:46.163 START TEST nvmf_bdev_io_wait 00:35:46.163 ************************************ 00:35:46.163 11:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:46.163 * Looking for test storage... 
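The teardown traced above is the standard nvmftestfini path: stop the target by pid, unload the NVMe-oF kernel modules, strip the SPDK-tagged iptables rules, and dismantle the test namespace. A condensed sketch reconstructed from the xtrace (helper bodies are simplified; _remove_spdk_ns internals are not shown in this excerpt, so the netns deletion line is an assumption):

  killprocess() {                                  # common/autotest_common.sh, simplified
    local pid=$1
    kill -0 "$pid" || return                       # still alive?
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return   # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
  }
  killprocess "$nvmfpid"
  modprobe -v -r nvme-tcp                          # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged SPDK_NVMF
  ip netns del cvl_0_0_ns_spdk                     # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1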
00:35:46.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:35:46.163 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:46.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.164 --rc genhtml_branch_coverage=1 00:35:46.164 --rc genhtml_function_coverage=1 00:35:46.164 --rc genhtml_legend=1 00:35:46.164 --rc geninfo_all_blocks=1 00:35:46.164 --rc geninfo_unexecuted_blocks=1 00:35:46.164 00:35:46.164 ' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:46.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.164 --rc genhtml_branch_coverage=1 00:35:46.164 --rc genhtml_function_coverage=1 00:35:46.164 --rc genhtml_legend=1 00:35:46.164 --rc geninfo_all_blocks=1 00:35:46.164 --rc geninfo_unexecuted_blocks=1 00:35:46.164 00:35:46.164 ' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:46.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.164 --rc genhtml_branch_coverage=1 00:35:46.164 --rc genhtml_function_coverage=1 00:35:46.164 --rc genhtml_legend=1 00:35:46.164 --rc geninfo_all_blocks=1 00:35:46.164 --rc geninfo_unexecuted_blocks=1 00:35:46.164 00:35:46.164 ' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:46.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.164 --rc genhtml_branch_coverage=1 00:35:46.164 --rc genhtml_function_coverage=1 00:35:46.164 --rc genhtml_legend=1 00:35:46.164 --rc geninfo_all_blocks=1 00:35:46.164 --rc 
geninfo_unexecuted_blocks=1 00:35:46.164 00:35:46.164 ' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:35:46.164 11:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
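gather_supported_nvmf_pci_devs classifies NICs by PCI vendor:device ID into the e810/x722/mlx arrays declared above and then, since this is a tcp run on e810 hardware, keeps only the e810 entries. A sketch of the table as it appears in the trace (pci_bus_cache, populated elsewhere in nvmf/common.sh, maps "vendor:device" to bus addresses; the comments are my reading of the IDs):

  intel=0x8086 mellanox=0x15b3
  e810+=(${pci_bus_cache["$intel:0x1592"]})
  e810+=(${pci_bus_cache["$intel:0x159b"]})     # matches the two ice-driven ports found below
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # one of several ConnectX entries in the list
  pci_devs+=("${e810[@]}")                      # tcp + e810: only the E810 ports are considered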
00:35:54.307 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:54.308 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:54.308 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:54.308 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:54.308 
11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:54.308 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:54.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:54.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:35:54.308 00:35:54.308 --- 10.0.0.2 ping statistics --- 00:35:54.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.308 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:54.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:54.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:35:54.308 00:35:54.308 --- 10.0.0.1 ping statistics --- 00:35:54.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.308 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1265821 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1265821 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:35:54.308 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1265821 ']' 00:35:54.309 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:54.309 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.309 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:54.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
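Topology-wise, nvmf_tcp_init (traced above) turns the two E810 ports into a point-to-point test link: cvl_0_0 moves into a private namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1), and the iptables ACCEPT for port 4420 carries an SPDK_NVMF comment so teardown can remove exactly this rule. The commands, collected from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator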
00:35:54.309 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.309 11:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.309 [2024-11-19 11:03:32.696701] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:54.309 [2024-11-19 11:03:32.697823] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:35:54.309 [2024-11-19 11:03:32.697874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:54.309 [2024-11-19 11:03:32.797004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:54.309 [2024-11-19 11:03:32.852055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:54.309 [2024-11-19 11:03:32.852104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:54.309 [2024-11-19 11:03:32.852113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:54.309 [2024-11-19 11:03:32.852121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:54.309 [2024-11-19 11:03:32.852128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:54.309 [2024-11-19 11:03:32.853980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.309 [2024-11-19 11:03:32.854123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:54.309 [2024-11-19 11:03:32.854287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.309 [2024-11-19 11:03:32.854287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:54.309 [2024-11-19 11:03:32.854964] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
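The startup notices above come from nvmfappstart: the target runs inside the namespace with four reactors (-m 0xF) in interrupt mode, so each reactor sleeps on an fd instead of busy-polling, and --wait-for-rpc parks the app after reactor init so bdev options can still be changed over RPC. Condensed launch sequence (rootdir abbreviates the workspace path shown in the trace):

  ip netns exec cvl_0_0_ns_spdk \
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!                   # -i 0: shm id (hence nvmf_trace.0); -e 0xFFFF: tracepoint mask
  waitforlisten "$nvmfpid"     # polls until /var/tmp/spdk.sock accepts RPC connections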
00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:54.570 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.571 [2024-11-19 11:03:33.626663] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:54.571 [2024-11-19 11:03:33.627417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:54.571 [2024-11-19 11:03:33.627509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:54.571 [2024-11-19 11:03:33.627687] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
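With the framework released, bdev_io_wait.sh wires up the export path with plain RPCs. The first call is the point of the whole test: a bdev_io pool of 5 with a per-thread cache of 1 is deliberately tiny, so the concurrent workloads later exhaust it and exercise the I/O-wait retry path (spdk_bdev_queue_io_wait). The sequence as traced (rpc_cmd wraps scripts/rpc.py against the target's socket):

  rpc_cmd bdev_set_options -p 5 -c 1        # tiny bdev_io pool: 5 entries, cache of 1
  rpc_cmd framework_start_init              # leave the --wait-for-rpc holding state
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420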
00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.571 [2024-11-19 11:03:33.639204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.571 Malloc0 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:54.571 [2024-11-19 11:03:33.711692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1266156 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1266158 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:54.571 { 00:35:54.571 "params": { 00:35:54.571 "name": "Nvme$subsystem", 00:35:54.571 "trtype": "$TEST_TRANSPORT", 00:35:54.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.571 "adrfam": "ipv4", 00:35:54.571 "trsvcid": "$NVMF_PORT", 00:35:54.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.571 "hdgst": ${hdgst:-false}, 00:35:54.571 "ddgst": ${ddgst:-false} 00:35:54.571 }, 00:35:54.571 "method": "bdev_nvme_attach_controller" 00:35:54.571 } 00:35:54.571 EOF 00:35:54.571 )") 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1266160 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:54.571 { 00:35:54.571 "params": { 00:35:54.571 "name": "Nvme$subsystem", 00:35:54.571 "trtype": "$TEST_TRANSPORT", 00:35:54.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.571 "adrfam": "ipv4", 00:35:54.571 "trsvcid": "$NVMF_PORT", 00:35:54.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.571 "hdgst": ${hdgst:-false}, 00:35:54.571 "ddgst": ${ddgst:-false} 00:35:54.571 }, 00:35:54.571 "method": "bdev_nvme_attach_controller" 00:35:54.571 } 00:35:54.571 EOF 00:35:54.571 )") 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1266163 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:54.571 { 00:35:54.571 "params": { 00:35:54.571 "name": "Nvme$subsystem", 00:35:54.571 "trtype": "$TEST_TRANSPORT", 00:35:54.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.571 "adrfam": "ipv4", 00:35:54.571 "trsvcid": "$NVMF_PORT", 00:35:54.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.571 "hdgst": ${hdgst:-false}, 00:35:54.571 "ddgst": ${ddgst:-false} 00:35:54.571 }, 00:35:54.571 "method": "bdev_nvme_attach_controller" 00:35:54.571 } 00:35:54.571 EOF 00:35:54.571 )") 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:54.571 { 00:35:54.571 "params": { 00:35:54.571 "name": "Nvme$subsystem", 00:35:54.571 "trtype": "$TEST_TRANSPORT", 00:35:54.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.571 "adrfam": "ipv4", 00:35:54.571 "trsvcid": "$NVMF_PORT", 00:35:54.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.571 "hdgst": ${hdgst:-false}, 00:35:54.571 "ddgst": ${ddgst:-false} 00:35:54.571 }, 00:35:54.571 "method": "bdev_nvme_attach_controller" 00:35:54.571 } 00:35:54.571 EOF 00:35:54.571 )") 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1266156 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
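Each bdevperf instance receives its NVMe-oF controller definition as JSON on a private file descriptor: gen_nvmf_target_json expands the heredoc template above once per subsystem, normalizes it with jq, and the caller hands the result over via bash process substitution, which is why the trace shows --json /dev/fd/63. The calling pattern, condensed (bdevperf path shortened):

  bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!       # three more instances follow for read, flush, and unmap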
00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:54.571 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:54.571 "params": { 00:35:54.571 "name": "Nvme1", 00:35:54.571 "trtype": "tcp", 00:35:54.571 "traddr": "10.0.0.2", 00:35:54.571 "adrfam": "ipv4", 00:35:54.571 "trsvcid": "4420", 00:35:54.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:54.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:54.572 "hdgst": false, 00:35:54.572 "ddgst": false 00:35:54.572 }, 00:35:54.572 "method": "bdev_nvme_attach_controller" 00:35:54.572 }' 00:35:54.572 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:54.572 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:54.572 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:54.572 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:54.572 "params": { 00:35:54.572 "name": "Nvme1", 00:35:54.572 "trtype": "tcp", 00:35:54.572 "traddr": "10.0.0.2", 00:35:54.572 "adrfam": "ipv4", 00:35:54.572 "trsvcid": "4420", 00:35:54.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:54.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:54.572 "hdgst": false, 00:35:54.572 "ddgst": false 00:35:54.572 }, 00:35:54.572 "method": "bdev_nvme_attach_controller" 00:35:54.572 }' 00:35:54.572 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:54.572 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:54.572 "params": { 00:35:54.572 "name": "Nvme1", 00:35:54.572 "trtype": "tcp", 00:35:54.572 "traddr": "10.0.0.2", 00:35:54.572 "adrfam": "ipv4", 00:35:54.572 "trsvcid": "4420", 00:35:54.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:54.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:54.572 "hdgst": false, 00:35:54.572 "ddgst": false 00:35:54.572 }, 00:35:54.572 "method": "bdev_nvme_attach_controller" 00:35:54.572 }' 00:35:54.572 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:54.572 11:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:54.572 "params": { 00:35:54.572 "name": "Nvme1", 00:35:54.572 "trtype": "tcp", 00:35:54.572 "traddr": "10.0.0.2", 00:35:54.572 "adrfam": "ipv4", 00:35:54.572 "trsvcid": "4420", 00:35:54.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:54.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:54.572 "hdgst": false, 00:35:54.572 "ddgst": false 00:35:54.572 }, 00:35:54.572 "method": "bdev_nvme_attach_controller" 00:35:54.572 }' 00:35:54.833 [2024-11-19 11:03:33.771792] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:35:54.833 [2024-11-19 11:03:33.771866] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:35:54.833 [2024-11-19 11:03:33.771942] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:35:54.833 [2024-11-19 11:03:33.772004] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:54.833 [2024-11-19 11:03:33.773555] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:35:54.833 [2024-11-19 11:03:33.773623] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:35:54.833 [2024-11-19 11:03:33.777809] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:35:54.834 [2024-11-19 11:03:33.777873] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:54.834 [2024-11-19 11:03:33.995394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.095 [2024-11-19 11:03:34.035312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:55.095 [2024-11-19 11:03:34.086577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.095 [2024-11-19 11:03:34.126343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:55.095 [2024-11-19 11:03:34.180325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.095 [2024-11-19 11:03:34.220879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:55.095 [2024-11-19 11:03:34.247712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.095 [2024-11-19 11:03:34.286136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:55.357 Running I/O for 1 seconds... 00:35:55.357 Running I/O for 1 seconds... 00:35:55.357 Running I/O for 1 seconds... 00:35:55.357 Running I/O for 1 seconds... 
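All four instances now run for one second in parallel against the same Malloc0 namespace, each pinned to its own core by the -m mask (write 0x10, read 0x20, flush 0x40, unmap 0x80) and given its own instance id and shm segment (-i 1..4). The script then serializes completion by waiting on the recorded pids, so the per-workload result tables below print as each run finishes:

  wait "$WRITE_PID"   # 1266156
  wait "$READ_PID"    # 1266158
  wait "$FLUSH_PID"   # 1266160
  wait "$UNMAP_PID"   # 1266163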
00:35:56.303 13053.00 IOPS, 50.99 MiB/s 00:35:56.303 Latency(us) 00:35:56.303 [2024-11-19T10:03:35.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.303 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:35:56.303 Nvme1n1 : 1.01 13112.40 51.22 0.00 0.00 9729.77 2307.41 12888.75 00:35:56.303 [2024-11-19T10:03:35.498Z] =================================================================================================================== 00:35:56.303 [2024-11-19T10:03:35.498Z] Total : 13112.40 51.22 0.00 0.00 9729.77 2307.41 12888.75 00:35:56.303 6966.00 IOPS, 27.21 MiB/s 00:35:56.303 Latency(us) 00:35:56.303 [2024-11-19T10:03:35.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.303 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:35:56.303 Nvme1n1 : 1.02 6942.64 27.12 0.00 0.00 18231.61 2239.15 34515.63 00:35:56.303 [2024-11-19T10:03:35.498Z] =================================================================================================================== 00:35:56.303 [2024-11-19T10:03:35.498Z] Total : 6942.64 27.12 0.00 0.00 18231.61 2239.15 34515.63 00:35:56.303 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1266158 00:35:56.303 187168.00 IOPS, 731.12 MiB/s 00:35:56.303 Latency(us) 00:35:56.303 [2024-11-19T10:03:35.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.303 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:35:56.303 Nvme1n1 : 1.00 186791.77 729.66 0.00 0.00 681.57 305.49 1979.73 00:35:56.303 [2024-11-19T10:03:35.498Z] =================================================================================================================== 00:35:56.303 [2024-11-19T10:03:35.498Z] Total : 186791.77 729.66 0.00 0.00 681.57 305.49 1979.73 00:35:56.564 6911.00 IOPS, 27.00 MiB/s 00:35:56.565 Latency(us) 00:35:56.565 [2024-11-19T10:03:35.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.565 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:35:56.565 Nvme1n1 : 1.01 6996.29 27.33 0.00 0.00 18232.97 5079.04 37355.52 00:35:56.565 [2024-11-19T10:03:35.760Z] =================================================================================================================== 00:35:56.565 [2024-11-19T10:03:35.760Z] Total : 6996.29 27.33 0.00 0.00 18232.97 5079.04 37355.52 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1266160 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1266163 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:56.565 rmmod nvme_tcp 00:35:56.565 rmmod nvme_fabrics 00:35:56.565 rmmod nvme_keyring 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1265821 ']' 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1265821 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1265821 ']' 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1265821 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.565 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1265821 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1265821' 00:35:56.827 killing process with pid 1265821 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1265821 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1265821 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:56.827 11:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:59.372 00:35:59.372 real 0m13.147s 00:35:59.372 user 0m15.969s 00:35:59.372 sys 0m7.748s 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:59.372 ************************************ 00:35:59.372 END TEST nvmf_bdev_io_wait 00:35:59.372 ************************************ 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:59.372 ************************************ 00:35:59.372 START TEST nvmf_queue_depth 00:35:59.372 ************************************ 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:59.372 * Looking for test storage... 
00:35:59.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:59.372 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:59.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.373 --rc genhtml_branch_coverage=1 00:35:59.373 --rc genhtml_function_coverage=1 00:35:59.373 --rc genhtml_legend=1 00:35:59.373 --rc geninfo_all_blocks=1 00:35:59.373 --rc geninfo_unexecuted_blocks=1 00:35:59.373 00:35:59.373 ' 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:59.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.373 --rc genhtml_branch_coverage=1 00:35:59.373 --rc genhtml_function_coverage=1 00:35:59.373 --rc genhtml_legend=1 00:35:59.373 --rc geninfo_all_blocks=1 00:35:59.373 --rc geninfo_unexecuted_blocks=1 00:35:59.373 00:35:59.373 ' 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:59.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.373 --rc genhtml_branch_coverage=1 00:35:59.373 --rc genhtml_function_coverage=1 00:35:59.373 --rc genhtml_legend=1 00:35:59.373 --rc geninfo_all_blocks=1 00:35:59.373 --rc geninfo_unexecuted_blocks=1 00:35:59.373 00:35:59.373 ' 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:59.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.373 --rc genhtml_branch_coverage=1 00:35:59.373 --rc genhtml_function_coverage=1 00:35:59.373 --rc genhtml_legend=1 00:35:59.373 --rc geninfo_all_blocks=1 00:35:59.373 --rc 
geninfo_unexecuted_blocks=1 00:35:59.373 00:35:59.373 ' 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.373 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:35:59.374 11:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:07.520 11:03:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:07.520 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:07.520 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:36:07.520 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:07.520 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:07.520 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:07.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:07.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.708 ms 00:36:07.521 00:36:07.521 --- 10.0.0.2 ping statistics --- 00:36:07.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:07.521 rtt min/avg/max/mdev = 0.708/0.708/0.708/0.000 ms 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:07.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:07.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:36:07.521 00:36:07.521 --- 10.0.0.1 ping statistics --- 00:36:07.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:07.521 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1270683 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1270683 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1270683 ']' 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:07.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
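The ping exchange above confirms the topology that nvmf_tcp_init built: the target-side port cvl_0_0 was moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1. Condensed from the trace above (commands as actually run):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns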
00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:07.521 11:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.521 [2024-11-19 11:03:45.930171] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:07.521 [2024-11-19 11:03:45.931317] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:36:07.521 [2024-11-19 11:03:45.931366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:07.521 [2024-11-19 11:03:46.036376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:07.521 [2024-11-19 11:03:46.087568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:07.521 [2024-11-19 11:03:46.087624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:07.521 [2024-11-19 11:03:46.087634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:07.521 [2024-11-19 11:03:46.087641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:07.521 [2024-11-19 11:03:46.087648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:07.521 [2024-11-19 11:03:46.088456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:07.521 [2024-11-19 11:03:46.165217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:07.521 [2024-11-19 11:03:46.165521] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
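Because the suite was invoked with --interrupt-mode, nvmfappstart passes that flag through to the target, which is what produces the spdk_interrupt_mode_enable and spdk_thread_set_interrupt_mode notices above. The launch, condensed from the trace at nvmf/common.sh@508, runs inside the target namespace:
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2   # mask 0x2 = core 1, matching "Reactor started on core 1"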
00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.784 [2024-11-19 11:03:46.809329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.784 Malloc0 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
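Once the listener call below returns, the target side of the queue-depth test is fully assembled. The rpc_cmd wrapper used throughout this block drives scripts/rpc.py against the target's RPC socket, so the sequence is equivalent to (arguments copied from the trace):
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420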
00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.784 [2024-11-19 11:03:46.889504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1270878 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1270878 /var/tmp/bdevperf.sock 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1270878 ']' 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:07.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:07.784 11:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:07.784 [2024-11-19 11:03:46.946213] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:36:07.784 [2024-11-19 11:03:46.946280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270878 ] 00:36:08.046 [2024-11-19 11:03:47.040323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.046 [2024-11-19 11:03:47.092075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.619 11:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:08.619 11:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:36:08.619 11:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:08.619 11:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.619 11:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:08.879 NVMe0n1 00:36:08.880 11:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.880 11:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:08.880 Running I/O for 10 seconds... 00:36:10.953 8199.00 IOPS, 32.03 MiB/s [2024-11-19T10:03:51.091Z] 8704.00 IOPS, 34.00 MiB/s [2024-11-19T10:03:52.475Z] 9218.33 IOPS, 36.01 MiB/s [2024-11-19T10:03:53.418Z] 10239.75 IOPS, 40.00 MiB/s [2024-11-19T10:03:54.359Z] 10873.40 IOPS, 42.47 MiB/s [2024-11-19T10:03:55.301Z] 11286.50 IOPS, 44.09 MiB/s [2024-11-19T10:03:56.243Z] 11620.86 IOPS, 45.39 MiB/s [2024-11-19T10:03:57.186Z] 11885.38 IOPS, 46.43 MiB/s [2024-11-19T10:03:58.129Z] 12064.78 IOPS, 47.13 MiB/s [2024-11-19T10:03:58.129Z] 12192.40 IOPS, 47.63 MiB/s 00:36:18.934 Latency(us) 00:36:18.934 [2024-11-19T10:03:58.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:18.934 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:36:18.934 Verification LBA range: start 0x0 length 0x4000 00:36:18.934 NVMe0n1 : 10.05 12237.54 47.80 0.00 0.00 83414.58 11250.35 74274.13 00:36:18.934 [2024-11-19T10:03:58.129Z] =================================================================================================================== 00:36:18.934 [2024-11-19T10:03:58.129Z] Total : 12237.54 47.80 0.00 0.00 83414.58 11250.35 74274.13 00:36:18.934 { 00:36:18.934 "results": [ 00:36:18.934 { 00:36:18.934 "job": "NVMe0n1", 00:36:18.934 "core_mask": "0x1", 00:36:18.934 "workload": "verify", 00:36:18.934 "status": "finished", 00:36:18.934 "verify_range": { 00:36:18.934 "start": 0, 00:36:18.934 "length": 16384 00:36:18.934 }, 00:36:18.934 "queue_depth": 1024, 00:36:18.934 "io_size": 4096, 00:36:18.934 "runtime": 10.046792, 00:36:18.934 "iops": 12237.538111667884, 00:36:18.934 "mibps": 47.80288324870267, 00:36:18.934 "io_failed": 0, 00:36:18.934 "io_timeout": 0, 00:36:18.934 "avg_latency_us": 83414.58380800554, 00:36:18.934 "min_latency_us": 11250.346666666666, 00:36:18.934 "max_latency_us": 74274.13333333333 00:36:18.934 } 
00:36:18.934 ], 00:36:18.934 "core_count": 1 00:36:18.934 } 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1270878 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1270878 ']' 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1270878 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1270878 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1270878' 00:36:19.195 killing process with pid 1270878 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1270878 00:36:19.195 Received shutdown signal, test time was about 10.000000 seconds 00:36:19.195 00:36:19.195 Latency(us) 00:36:19.195 [2024-11-19T10:03:58.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.195 [2024-11-19T10:03:58.390Z] =================================================================================================================== 00:36:19.195 [2024-11-19T10:03:58.390Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1270878 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:19.195 rmmod nvme_tcp 00:36:19.195 rmmod nvme_fabrics 00:36:19.195 rmmod nvme_keyring 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
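The numbers in the results block above are internally consistent, which is a quick way to sanity-check a bdevperf run:
# throughput: 12237.54 IOPS x 4096 B ~= 50.1 MB/s = 47.80 MiB/s, matching "mibps"
# Little's law: W = L / lambda = 1024 / 12237.54 s ~= 83.7 ms,
#   in line with the reported 83.41 ms average latency at queue depth 1024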
00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1270683 ']' 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1270683 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1270683 ']' 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1270683 00:36:19.195 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1270683 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1270683' 00:36:19.456 killing process with pid 1270683 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1270683 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1270683 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:19.456 11:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:22.002 00:36:22.002 real 0m22.520s 00:36:22.002 user 0m24.723s 00:36:22.002 sys 0m7.497s 00:36:22.002 11:04:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:22.002 ************************************ 00:36:22.002 END TEST nvmf_queue_depth 00:36:22.002 ************************************ 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:22.002 ************************************ 00:36:22.002 START TEST nvmf_target_multipath 00:36:22.002 ************************************ 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:36:22.002 * Looking for test storage... 00:36:22.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:36:22.002 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:22.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.003 --rc genhtml_branch_coverage=1 00:36:22.003 --rc genhtml_function_coverage=1 00:36:22.003 --rc genhtml_legend=1 00:36:22.003 --rc geninfo_all_blocks=1 00:36:22.003 --rc geninfo_unexecuted_blocks=1 00:36:22.003 00:36:22.003 ' 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:22.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.003 --rc genhtml_branch_coverage=1 00:36:22.003 --rc genhtml_function_coverage=1 00:36:22.003 --rc genhtml_legend=1 00:36:22.003 --rc geninfo_all_blocks=1 00:36:22.003 --rc geninfo_unexecuted_blocks=1 00:36:22.003 00:36:22.003 ' 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:22.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.003 --rc genhtml_branch_coverage=1 00:36:22.003 --rc genhtml_function_coverage=1 00:36:22.003 --rc genhtml_legend=1 
00:36:22.003 --rc geninfo_all_blocks=1 00:36:22.003 --rc geninfo_unexecuted_blocks=1 00:36:22.003 00:36:22.003 ' 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:22.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.003 --rc genhtml_branch_coverage=1 00:36:22.003 --rc genhtml_function_coverage=1 00:36:22.003 --rc genhtml_legend=1 00:36:22.003 --rc geninfo_all_blocks=1 00:36:22.003 --rc geninfo_unexecuted_blocks=1 00:36:22.003 00:36:22.003 ' 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.003 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:36:22.004 11:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:30.150 11:04:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:30.150 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:30.150 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:30.150 11:04:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:30.150 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:30.151 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:30.151 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:30.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:30.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms
00:36:30.151
00:36:30.151 --- 10.0.0.2 ping statistics ---
00:36:30.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:30.151 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:30.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:30.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms
00:36:30.151
00:36:30.151 --- 10.0.0.1 ping statistics ---
00:36:30.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:30.151 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:36:30.151 11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
only one NIC for nvmf test
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
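
The ip/netns sequence above is how the harness builds a two-endpoint TCP topology out of a single host: the target-side port cvl_0_0 moves into a private network namespace and takes 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens the NVMe/TCP listen port 4420, and one ping in each direction proves reachability before any NVMe-oF work starts. Condensed into a standalone sketch (interface and namespace names are the ones from this run):

    # Condensed sketch of the namespace setup traced above; cvl_0_0 and
    # cvl_0_1 are this run's two E810 ports.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0                   # start from clean addresses
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP port; the comment string tags the rule for cleanup.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                         # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> root namespace

The SPDK_NVMF comment is what makes the teardown reversible: the iptr step traced below runs iptables-save | grep -v SPDK_NVMF | iptables-restore, which drops exactly the tagged rules and leaves the rest of the firewall untouched.
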
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
11:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:36:31.538 11:04:10
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:31.538 00:36:31.538 real 0m9.896s 00:36:31.538 user 0m2.132s 00:36:31.538 sys 0m5.712s 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:31.538 ************************************ 00:36:31.538 END TEST nvmf_target_multipath 00:36:31.538 ************************************ 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:31.538 ************************************ 00:36:31.538 START TEST nvmf_zcopy 00:36:31.538 ************************************ 00:36:31.538 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:31.801 * Looking for test storage... 
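
The banners and the real/user/sys triple in the surrounding lines come from the harness's run_test wrapper: it prints a START TEST banner, executes the test script under time with xtrace enabled, then prints END TEST, and that is how nvmf_zcopy is being entered here. A bare-bones sketch of such a wrapper; the real helper lives in autotest_common.sh and does more bookkeeping (including the test-storage probe around this point) than shown:

    # Bare-bones run_test-style wrapper (illustrative only; not the
    # actual autotest_common.sh implementation).
    run_test_sketch() {
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"               # run the test; time prints real/user/sys
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }

    run_test_sketch nvmf_zcopy ./zcopy.sh --transport=tcp --interrupt-mode

Propagating the script's own exit status is what lets the caller chain dozens of run_test invocations, as nvmf_target_core.sh does here, and still fail the build at the first broken test.
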
00:36:31.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:31.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.801 --rc genhtml_branch_coverage=1 00:36:31.801 --rc genhtml_function_coverage=1 00:36:31.801 --rc genhtml_legend=1 00:36:31.801 --rc geninfo_all_blocks=1 00:36:31.801 --rc geninfo_unexecuted_blocks=1 00:36:31.801 00:36:31.801 ' 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:31.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.801 --rc genhtml_branch_coverage=1 00:36:31.801 --rc genhtml_function_coverage=1 00:36:31.801 --rc genhtml_legend=1 00:36:31.801 --rc geninfo_all_blocks=1 00:36:31.801 --rc geninfo_unexecuted_blocks=1 00:36:31.801 00:36:31.801 ' 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:31.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.801 --rc genhtml_branch_coverage=1 00:36:31.801 --rc genhtml_function_coverage=1 00:36:31.801 --rc genhtml_legend=1 00:36:31.801 --rc geninfo_all_blocks=1 00:36:31.801 --rc geninfo_unexecuted_blocks=1 00:36:31.801 00:36:31.801 ' 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:31.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.801 --rc genhtml_branch_coverage=1 00:36:31.801 --rc genhtml_function_coverage=1 00:36:31.801 --rc genhtml_legend=1 00:36:31.801 --rc geninfo_all_blocks=1 00:36:31.801 --rc geninfo_unexecuted_blocks=1 00:36:31.801 00:36:31.801 ' 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.801 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:31.802 11:04:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:36:31.802 11:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:39.954 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:39.954 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:36:39.954 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:39.954 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:39.954 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:39.954 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:39.954 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:39.954 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:36:39.955 11:04:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:39.955 11:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:39.955 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:39.955 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:39.955 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:39.955 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:39.955 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:39.955 11:04:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:39.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:39.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:36:39.956 00:36:39.956 --- 10.0.0.2 ping statistics --- 00:36:39.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.956 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:39.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:39.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:36:39.956 00:36:39.956 --- 10.0.0.1 ping statistics --- 00:36:39.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.956 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1281271 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1281271 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1281271 ']' 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:39.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.956 11:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:39.956 [2024-11-19 11:04:18.411981] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:39.956 [2024-11-19 11:04:18.413123] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:36:39.956 [2024-11-19 11:04:18.413194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:39.956 [2024-11-19 11:04:18.498431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.956 [2024-11-19 11:04:18.549032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:39.956 [2024-11-19 11:04:18.549081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:39.956 [2024-11-19 11:04:18.549091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:39.956 [2024-11-19 11:04:18.549098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:39.956 [2024-11-19 11:04:18.549104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:39.956 [2024-11-19 11:04:18.549873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.956 [2024-11-19 11:04:18.626389] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:39.956 [2024-11-19 11:04:18.626688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
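[For orientation: the rpc_cmd traces that follow boil down to this bring-up sequence. A minimal sketch with the flags copied verbatim from the trace; rpc_cmd is the test harness's wrapper around scripts/rpc.py, pointed at the target just started inside the cvl_0_0_ns_spdk namespace.]

    # Target bring-up as traced below; --zcopy on nvmf_create_transport selects
    # the zero-copy TCP path this nvmf_zcopy test exercises.
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0    # 32 MiB malloc bdev, 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1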
00:36:40.219 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.219 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:36:40.219 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:40.219 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:40.219 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:40.219 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:40.219 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:40.219 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:40.219 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:40.220 [2024-11-19 11:04:19.262729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:40.220 [2024-11-19 11:04:19.291004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.220 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:40.221 11:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.221 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:40.221 malloc0 00:36:40.221 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.221 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:40.221 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.221 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:40.221 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.221 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:40.221 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:36:40.222 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:40.222 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:40.222 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:40.222 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:40.222 { 00:36:40.222 "params": { 00:36:40.222 "name": "Nvme$subsystem", 00:36:40.222 "trtype": "$TEST_TRANSPORT", 00:36:40.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.222 "adrfam": "ipv4", 00:36:40.222 "trsvcid": "$NVMF_PORT", 00:36:40.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.222 "hdgst": ${hdgst:-false}, 00:36:40.222 "ddgst": ${ddgst:-false} 00:36:40.222 }, 00:36:40.222 "method": "bdev_nvme_attach_controller" 00:36:40.222 } 00:36:40.222 EOF 00:36:40.222 )") 00:36:40.222 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:40.222 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:36:40.222 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:40.222 11:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:40.222 "params": { 00:36:40.223 "name": "Nvme1", 00:36:40.223 "trtype": "tcp", 00:36:40.223 "traddr": "10.0.0.2", 00:36:40.223 "adrfam": "ipv4", 00:36:40.223 "trsvcid": "4420", 00:36:40.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:40.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:40.223 "hdgst": false, 00:36:40.223 "ddgst": false 00:36:40.223 }, 00:36:40.223 "method": "bdev_nvme_attach_controller" 00:36:40.223 }' 00:36:40.223 [2024-11-19 11:04:19.394993] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
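[The JSON just printed is what bdevperf consumes: gen_nvmf_target_json fills in the heredoc above and the harness hands the result to bdevperf over a process-substitution descriptor, which is where the --json /dev/fd/62 in the traced command comes from. A minimal sketch of the equivalent invocation, with paths and flags taken from the trace; whatever outer wrapper gen_nvmf_target_json may add around the printed fragment is not visible in this excerpt.]

    # bdevperf attaches to the target as an NVMe-oF/TCP initiator using the
    # generated config: 10 s verify workload, queue depth 128, 8 KiB I/O.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192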
00:36:40.223 [2024-11-19 11:04:19.395060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281564 ] 00:36:40.489
[2024-11-19 11:04:19.487357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.489
[2024-11-19 11:04:19.539851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.749
Running I/O for 10 seconds... 00:36:42.638
6364.00 IOPS, 49.72 MiB/s [2024-11-19T10:04:22.778Z]
6427.50 IOPS, 50.21 MiB/s [2024-11-19T10:04:24.164Z]
6454.67 IOPS, 50.43 MiB/s [2024-11-19T10:04:25.105Z]
6462.00 IOPS, 50.48 MiB/s [2024-11-19T10:04:26.088Z]
6653.00 IOPS, 51.98 MiB/s [2024-11-19T10:04:27.028Z]
7148.17 IOPS, 55.85 MiB/s [2024-11-19T10:04:27.970Z]
7504.43 IOPS, 58.63 MiB/s [2024-11-19T10:04:28.913Z]
7772.50 IOPS, 60.72 MiB/s [2024-11-19T10:04:29.855Z]
7980.33 IOPS, 62.35 MiB/s [2024-11-19T10:04:29.855Z]
8143.90 IOPS, 63.62 MiB/s 00:36:50.660
Latency(us) 00:36:50.660
[2024-11-19T10:04:29.855Z] Device Information : runtime(s)     IOPS     MiB/s   Fail/s   TO/s    Average       min       max 00:36:50.660
Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:36:50.660
Verification LBA range: start 0x0 length 0x1000 00:36:50.660
Nvme1n1            :     10.01   8149.06    63.66     0.00    0.00   15662.46   1624.75  27634.35 00:36:50.660
[2024-11-19T10:04:29.855Z] =================================================================================================================== 00:36:50.660
[2024-11-19T10:04:29.855Z] Total              :             8149.06    63.66     0.00    0.00   15662.46   1624.75  27634.35 00:36:50.920
11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1283561
11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:50.921
11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:50.921
{ 00:36:50.921
"params": { 00:36:50.921
"name": "Nvme$subsystem", 00:36:50.921
"trtype": "$TEST_TRANSPORT", 00:36:50.921
"traddr": "$NVMF_FIRST_TARGET_IP", 00:36:50.921
"adrfam": "ipv4", 00:36:50.921
"trsvcid": "$NVMF_PORT", 00:36:50.921
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:50.921
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:50.921
"hdgst": ${hdgst:-false}, 00:36:50.921
"ddgst": ${ddgst:-false} 00:36:50.921
}, 00:36:50.921
"method": "bdev_nvme_attach_controller" 00:36:50.921
} 00:36:50.921
EOF 00:36:50.921
)") 00:36:50.921
11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:50.921
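[A second bdevperf pass follows: 5 s of random read/write at a 50% mix. While it runs, the harness keeps calling nvmf_subsystem_add_ns for NSID 1, which malloc0 already occupies; each attempt pauses and resumes the subsystem, so the two-entry error pair below repeats for the whole run. A hypothetical reconstruction of that pattern -- the actual loop lives in target/zcopy.sh and is not shown in this excerpt:]

    # Drive randrw I/O in the background, then hammer the namespace hot-add
    # path; every attempt is expected to fail with "Requested NSID 1 already
    # in use" -- the point is the pause/resume cycle it forces under live I/O.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done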
[2024-11-19 11:04:29.882297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:29.882327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:36:50.921 11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:50.921 11:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:50.921 "params": { 00:36:50.921 "name": "Nvme1", 00:36:50.921 "trtype": "tcp", 00:36:50.921 "traddr": "10.0.0.2", 00:36:50.921 "adrfam": "ipv4", 00:36:50.921 "trsvcid": "4420", 00:36:50.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:50.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:50.921 "hdgst": false, 00:36:50.921 "ddgst": false 00:36:50.921 }, 00:36:50.921 "method": "bdev_nvme_attach_controller" 00:36:50.921 }' 00:36:50.921 [2024-11-19 11:04:29.894266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:29.894276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:29.906264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:29.906273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:29.918264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:29.918273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:29.925762] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:36:50.921 [2024-11-19 11:04:29.925812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283561 ] 00:36:50.921 [2024-11-19 11:04:29.930264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:29.930272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:29.942264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:29.942272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:29.954263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:29.954272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:29.966263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:29.966271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:29.978264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:29.978272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:29.990263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:29.990271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:30.002264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:30.002273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:30.009013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:50.921 [2024-11-19 11:04:30.014268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:30.014278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:30.026274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:30.026290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:30.038285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:30.038309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:30.039602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.921 [2024-11-19 11:04:30.050267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:30.050277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:30.062268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:50.921 [2024-11-19 11:04:30.062280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921 [2024-11-19 11:04:30.074268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:36:50.921 [2024-11-19 11:04:30.074280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:50.921
[...the same two-entry pair -- subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace -- repeats roughly every 12-16 ms from 11:04:30.086 through 11:04:30.206 while the second bdevperf instance initializes...]
Running I/O for 5 seconds... 00:36:51.183
[...the add_ns pair keeps repeating at the same cadence, 11:04:30.218 through 11:04:31.206, under the random read/write load...]
19063.00 IOPS, 148.93 MiB/s [2024-11-19T10:04:31.425Z]
[...repetition continues, 11:04:31.221 through 11:04:31.967...]
00:36:53.014 [2024-11-19 11:04:31.981783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:31.981799]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:31.994838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:31.994853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.009464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.009480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.022494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.022510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.035622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.035637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.049633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.049648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.062869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.062883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.077421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.077436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.090218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.090232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.103321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.103336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.117645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.117662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.130382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.130397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.143213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.143229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.157830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.157846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.170695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.170710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.185480] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.185496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.014 [2024-11-19 11:04:32.198621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.014 [2024-11-19 11:04:32.198635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.213588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.213604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 19078.00 IOPS, 149.05 MiB/s [2024-11-19T10:04:32.470Z] [2024-11-19 11:04:32.226813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.226827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.241446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.241461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.254645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.254659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.269210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.269225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.282384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.282400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.295433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.295447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.309415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.309434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.322416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.322431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.335110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.335125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.348924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.348939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.361884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.361899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.374618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:36:53.275 [2024-11-19 11:04:32.374632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.389214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.389228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.401984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.401999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.414698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.414712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.429008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.429023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.441998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.442013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.455036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.455051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.275 [2024-11-19 11:04:32.469644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.275 [2024-11-19 11:04:32.469659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.482498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.482514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.494509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.494524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.507641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.507656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.521707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.521721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.534599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.534613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.549405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.549420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.562809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.562827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.577626] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.577641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.590996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.591010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.605001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.605016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.617962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.617977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.631760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.631776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.645715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.645730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.658767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.658782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.673809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.673825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.687097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.687113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.701739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.701754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.714850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.714865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.536 [2024-11-19 11:04:32.729727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.536 [2024-11-19 11:04:32.729743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.797 [2024-11-19 11:04:32.742646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.797 [2024-11-19 11:04:32.742661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.797 [2024-11-19 11:04:32.757491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.797 [2024-11-19 11:04:32.757507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.770512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.770528] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.783090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.783105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.797011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.797026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.810778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.810793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.825131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.825151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.838219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.838235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.851134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.851149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.865238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.865253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.878163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.878179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.890623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.890638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.905236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.905252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.918420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.918435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.930985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.931000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.945791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.945806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.958970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.958986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.973518] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.973535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.798 [2024-11-19 11:04:32.986391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.798 [2024-11-19 11:04:32.986407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.060 [2024-11-19 11:04:32.999698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.060 [2024-11-19 11:04:32.999714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.060 [2024-11-19 11:04:33.013435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.060 [2024-11-19 11:04:33.013450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.060 [2024-11-19 11:04:33.026415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.060 [2024-11-19 11:04:33.026431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.060 [2024-11-19 11:04:33.039296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.060 [2024-11-19 11:04:33.039312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.060 [2024-11-19 11:04:33.053447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.060 [2024-11-19 11:04:33.053463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.060 [2024-11-19 11:04:33.066381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.060 [2024-11-19 11:04:33.066396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.060 [2024-11-19 11:04:33.079354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.060 [2024-11-19 11:04:33.079373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.060 [2024-11-19 11:04:33.093484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.060 [2024-11-19 11:04:33.093500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.060 [2024-11-19 11:04:33.106321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.060 [2024-11-19 11:04:33.106337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.060 [2024-11-19 11:04:33.118720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.060 [2024-11-19 11:04:33.118736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.061 [2024-11-19 11:04:33.133497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.061 [2024-11-19 11:04:33.133513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.061 [2024-11-19 11:04:33.146294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.061 [2024-11-19 11:04:33.146309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.061 [2024-11-19 11:04:33.159827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.061 [2024-11-19 11:04:33.159843] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.061 [2024-11-19 11:04:33.173699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.061 [2024-11-19 11:04:33.173716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.061 [2024-11-19 11:04:33.186763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.061 [2024-11-19 11:04:33.186779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.061 [2024-11-19 11:04:33.201329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.061 [2024-11-19 11:04:33.201344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.061 [2024-11-19 11:04:33.214194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.061 [2024-11-19 11:04:33.214209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.061 19077.00 IOPS, 149.04 MiB/s [2024-11-19T10:04:33.256Z] [2024-11-19 11:04:33.227660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.061 [2024-11-19 11:04:33.227676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.061 [2024-11-19 11:04:33.241678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.061 [2024-11-19 11:04:33.241693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.061 [2024-11-19 11:04:33.255250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.061 [2024-11-19 11:04:33.255265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.322 [2024-11-19 11:04:33.269362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.322 [2024-11-19 11:04:33.269378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.322 [2024-11-19 11:04:33.282372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.322 [2024-11-19 11:04:33.282387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.322 [2024-11-19 11:04:33.294852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.322 [2024-11-19 11:04:33.294868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.322 [2024-11-19 11:04:33.309242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.322 [2024-11-19 11:04:33.309258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.322 [2024-11-19 11:04:33.322359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.322 [2024-11-19 11:04:33.322375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.322 [2024-11-19 11:04:33.335200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.322 [2024-11-19 11:04:33.335215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.322 [2024-11-19 11:04:33.349532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.322 [2024-11-19 11:04:33.349548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.322 [2024-11-19 
11:04:33.362575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.322 [2024-11-19 11:04:33.362590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.322 [2024-11-19 11:04:33.377010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.322 [2024-11-19 11:04:33.377026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.322 [2024-11-19 11:04:33.389482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.322 [2024-11-19 11:04:33.389498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.323 [2024-11-19 11:04:33.402538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.323 [2024-11-19 11:04:33.402553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.323 [2024-11-19 11:04:33.417453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.323 [2024-11-19 11:04:33.417468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.323 [2024-11-19 11:04:33.430451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.323 [2024-11-19 11:04:33.430467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.323 [2024-11-19 11:04:33.443270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.323 [2024-11-19 11:04:33.443285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.323 [2024-11-19 11:04:33.457817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.323 [2024-11-19 11:04:33.457832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.323 [2024-11-19 11:04:33.470714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.323 [2024-11-19 11:04:33.470729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.323 [2024-11-19 11:04:33.485427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.323 [2024-11-19 11:04:33.485442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.323 [2024-11-19 11:04:33.498914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.323 [2024-11-19 11:04:33.498929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.323 [2024-11-19 11:04:33.513691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.323 [2024-11-19 11:04:33.513707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.526732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.526747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.541823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.541839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.554842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.554857] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.569302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.569317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.582513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.582529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.595505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.595520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.609517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.609533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.622710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.622725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.637452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.637468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.650486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.650502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.663404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.663419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.677514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.677529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.690055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.690070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.702901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.702916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.717672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.717688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.730729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.730744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.745270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.745286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.758288] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.758303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.584 [2024-11-19 11:04:33.771698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.584 [2024-11-19 11:04:33.771713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.844 [2024-11-19 11:04:33.785593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.844 [2024-11-19 11:04:33.785609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.844 [2024-11-19 11:04:33.798692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.844 [2024-11-19 11:04:33.798707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.844 [2024-11-19 11:04:33.813754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.844 [2024-11-19 11:04:33.813769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.844 [2024-11-19 11:04:33.827017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.844 [2024-11-19 11:04:33.827032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.844 [2024-11-19 11:04:33.842171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.844 [2024-11-19 11:04:33.842186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.844 [2024-11-19 11:04:33.855080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.844 [2024-11-19 11:04:33.855095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.844 [2024-11-19 11:04:33.869317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.844 [2024-11-19 11:04:33.869332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:33.882405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:33.882420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:33.895770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:33.895785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:33.909542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:33.909557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:33.922556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:33.922571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:33.937312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:33.937328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:33.950477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:33.950492] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:33.963145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:33.963163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:33.977703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:33.977718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:33.991043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:33.991058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:34.005758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:34.005774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:34.018618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:34.018632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.845 [2024-11-19 11:04:34.033288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.845 [2024-11-19 11:04:34.033303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.106 [2024-11-19 11:04:34.046329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.106 [2024-11-19 11:04:34.046344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.059326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.059340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.073559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.073575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.086623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.086638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.101444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.101463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.114609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.114624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.128789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.128805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.141555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.141571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.154728] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.154743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.169475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.169490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.182765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.182780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.196909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.196925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.210586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.210601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.225099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.225114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 19061.25 IOPS, 148.92 MiB/s [2024-11-19T10:04:34.302Z] [2024-11-19 11:04:34.238564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.238578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.253393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.253409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.266388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.266404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.278943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.278958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.107 [2024-11-19 11:04:34.293515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.107 [2024-11-19 11:04:34.293530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.306688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.306703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.321180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.321195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.334386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.334400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.347148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:36:55.367 [2024-11-19 11:04:34.347167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.361002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.361021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.373834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.373849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.386444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.386459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.399367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.399382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.413833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.413848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.426970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.426986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.440908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.440924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.454058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.367 [2024-11-19 11:04:34.454074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.367 [2024-11-19 11:04:34.466742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.368 [2024-11-19 11:04:34.466758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.368 [2024-11-19 11:04:34.481194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.368 [2024-11-19 11:04:34.481209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.368 [2024-11-19 11:04:34.494291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.368 [2024-11-19 11:04:34.494307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.368 [2024-11-19 11:04:34.507164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.368 [2024-11-19 11:04:34.507180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.368 [2024-11-19 11:04:34.521638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.368 [2024-11-19 11:04:34.521654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.368 [2024-11-19 11:04:34.534513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.368 [2024-11-19 11:04:34.534528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:55.368 [2024-11-19 11:04:34.547690] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:55.368 [2024-11-19 11:04:34.547704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same subsystem.c:2123 / nvmf_rpc.c:1517 error pair repeats at roughly 13 ms intervals from 11:04:34.561724 through 11:04:35.236760 while the I/O job runs; some 50 further repetitions elided]
00:36:56.153 19078.60 IOPS, 149.05 MiB/s [2024-11-19T10:04:35.348Z]
00:36:56.153 Latency(us)
00:36:56.153 [2024-11-19T10:04:35.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:56.153 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:36:56.153 Nvme1n1 : 5.01 19080.07 149.06 0.00 0.00 6702.59 2607.79 11250.35
00:36:56.153 [2024-11-19T10:04:35.348Z] ===================================================================================================================
00:36:56.153 [2024-11-19T10:04:35.348Z] Total : 19080.07 149.06 0.00 0.00 6702.59 2607.79 11250.35
00:36:56.153 [the error pair continues at 11:04:35.246267, 11:04:35.258274 and 11:04:35.270270; elided] [2024-11-19 11:04:35.282268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.153 [2024-11-19
11:04:35.282282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.153 [2024-11-19 11:04:35.294268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.153 [2024-11-19 11:04:35.294278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.153 [2024-11-19 11:04:35.306265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.153 [2024-11-19 11:04:35.306275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.153 [2024-11-19 11:04:35.318269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.153 [2024-11-19 11:04:35.318285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.153 [2024-11-19 11:04:35.330266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.153 [2024-11-19 11:04:35.330276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1283561) - No such process 00:36:56.153 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1283561 00:36:56.153 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.153 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.153 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:56.413 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.414 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:56.414 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.414 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:56.414 delay0 00:36:56.414 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.414 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:36:56.414 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.414 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:56.414 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.414 11:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:36:56.414 [2024-11-19 11:04:35.453553] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:04.558 [2024-11-19 11:04:42.255054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5f60 is same 
with the state(6) to be set 00:37:04.558 [2024-11-19 11:04:42.255089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a5f60 is same with the state(6) to be set 00:37:04.558 Initializing NVMe Controllers 00:37:04.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:04.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:04.558 Initialization complete. Launching workers. 00:37:04.558 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5218 00:37:04.558 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5505, failed to submit 33 00:37:04.558 success 5346, unsuccessful 159, failed 0 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:04.558 rmmod nvme_tcp 00:37:04.558 rmmod nvme_fabrics 00:37:04.558 rmmod nvme_keyring 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1281271 ']' 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1281271 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1281271 ']' 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1281271 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1281271 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1281271' 00:37:04.558 killing process with pid 1281271 00:37:04.558 11:04:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1281271 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1281271 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:04.558 11:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:05.503 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:05.503 00:37:05.503 real 0m33.865s 00:37:05.503 user 0m43.066s 00:37:05.503 sys 0m12.420s 00:37:05.503 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:05.503 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:05.503 ************************************ 00:37:05.503 END TEST nvmf_zcopy 00:37:05.503 ************************************ 00:37:05.503 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:37:05.503 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:05.503 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:05.503 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:05.503 ************************************ 00:37:05.503 START TEST nvmf_nmic 00:37:05.503 ************************************ 00:37:05.503 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:37:05.772 * Looking for test storage... 
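(Recap of the zcopy run that just ended, before the nmic preamble continues.) The NSID collisions logged above are expected: target/zcopy.sh keeps re-adding NSID 1 while that NSID is live, then swaps in a delay bdev and drives the abort example against it. A minimal sketch of the equivalent RPC sequence with SPDK's scripts/rpc.py (paths relative to an SPDK checkout; IP, NQN and latency values are the ones traced above):

    # wrap the malloc bdev in a delay bdev (latencies in microseconds)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose it as NSID 1; a second add_ns against the same NSID fails with
    # "Requested NSID 1 already in use", exactly as logged above
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # queue slow I/O against the namespace and abort it (flags copied from the run above)
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'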
00:37:05.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:05.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.772 --rc genhtml_branch_coverage=1 00:37:05.772 --rc genhtml_function_coverage=1 00:37:05.772 --rc genhtml_legend=1 00:37:05.772 --rc geninfo_all_blocks=1 00:37:05.772 --rc geninfo_unexecuted_blocks=1 00:37:05.772 00:37:05.772 ' 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:05.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.772 --rc genhtml_branch_coverage=1 00:37:05.772 --rc genhtml_function_coverage=1 00:37:05.772 --rc genhtml_legend=1 00:37:05.772 --rc geninfo_all_blocks=1 00:37:05.772 --rc geninfo_unexecuted_blocks=1 00:37:05.772 00:37:05.772 ' 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:05.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.772 --rc genhtml_branch_coverage=1 00:37:05.772 --rc genhtml_function_coverage=1 00:37:05.772 --rc genhtml_legend=1 00:37:05.772 --rc geninfo_all_blocks=1 00:37:05.772 --rc geninfo_unexecuted_blocks=1 00:37:05.772 00:37:05.772 ' 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:05.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.772 --rc genhtml_branch_coverage=1 00:37:05.772 --rc genhtml_function_coverage=1 00:37:05.772 --rc genhtml_legend=1 00:37:05.772 --rc geninfo_all_blocks=1 00:37:05.772 --rc geninfo_unexecuted_blocks=1 00:37:05.772 00:37:05.772 ' 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go prefix triple repeated; elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # [full repeated PATH value elided] 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same prefix set, go first; elided]:/var/lib/snapd/snap/bin 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same prefix set, protoc first; elided]:/var/lib/snapd/snap/bin 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:37:05.772 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same PATH value as above; elided]:/var/lib/snapd/snap/bin 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:05.773 11:04:44
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:37:05.773 11:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.041 11:04:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:14.041 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.041 11:04:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:14.041 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.041 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:14.042 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.042 
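(The NIC discovery traced here is a plain sysfs lookup: for each matching PCI function, common.sh globs the net devices the kernel bound to it. A standalone sketch of the same idea, using the E810 port found above:)

    pci=0000:4b:00.0
    # kernel net devices backing this PCI function, e.g. cvl_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    echo "${pci_net_devs[@]##*/}"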
11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:14.042 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:37:14.042 00:37:14.042 --- 10.0.0.2 ping statistics --- 00:37:14.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.042 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:14.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:37:14.042 00:37:14.042 --- 10.0.0.1 ping statistics --- 00:37:14.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.042 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1289904 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1289904 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1289904 ']' 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:14.042 11:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.042 [2024-11-19 11:04:52.473684] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:14.042 [2024-11-19 11:04:52.474804] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:37:14.042 [2024-11-19 11:04:52.474855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.042 [2024-11-19 11:04:52.575660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:14.042 [2024-11-19 11:04:52.630421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.042 [2024-11-19 11:04:52.630473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.042 [2024-11-19 11:04:52.630481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.042 [2024-11-19 11:04:52.630488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.042 [2024-11-19 11:04:52.630495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.042 [2024-11-19 11:04:52.632438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.042 [2024-11-19 11:04:52.632597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:14.042 [2024-11-19 11:04:52.632758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.042 [2024-11-19 11:04:52.632758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:14.042 [2024-11-19 11:04:52.709964] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:14.042 [2024-11-19 11:04:52.711095] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:14.042 [2024-11-19 11:04:52.711208] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
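(The target start traced above can be reproduced by hand. A sketch assuming the same netns and binary paths; waitforlisten effectively polls the RPC socket until the app answers, and framework_get_reactors is one way to confirm the four reactors requested with -m 0xF:)

    sudo ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    # wait for /var/tmp/spdk.sock to come up, then list the reactors
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    scripts/rpc.py framework_get_reactors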
00:37:14.042 [2024-11-19 11:04:52.711690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:14.042 [2024-11-19 11:04:52.711742] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:14.304 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:14.304 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:37:14.304 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:14.304 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:14.304 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.304 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:14.304 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.305 [2024-11-19 11:04:53.321670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.305 Malloc0 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
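(For reference, the nmic setup traced so far maps one-to-one onto rpc.py calls; rpc_cmd is effectively rpc.py run against the target's /var/tmp/spdk.sock:)

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420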
00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.305 [2024-11-19 11:04:53.413842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:37:14.305 test case1: single bdev can't be used in multiple subsystems 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.305 [2024-11-19 11:04:53.449242] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:37:14.305 [2024-11-19 11:04:53.449268] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:37:14.305 [2024-11-19 11:04:53.449277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:14.305 request: 00:37:14.305 { 00:37:14.305 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:37:14.305 "namespace": { 00:37:14.305 "bdev_name": "Malloc0", 00:37:14.305 "no_auto_visible": false 00:37:14.305 }, 00:37:14.305 "method": "nvmf_subsystem_add_ns", 00:37:14.305 "req_id": 1 00:37:14.305 } 00:37:14.305 Got JSON-RPC error response 00:37:14.305 response: 00:37:14.305 { 00:37:14.305 "code": -32602, 00:37:14.305 "message": "Invalid parameters" 00:37:14.305 } 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:37:14.305 11:04:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:37:14.305 Adding namespace failed - expected result. 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:37:14.305 test case2: host connect to nvmf target in multiple paths 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:14.305 [2024-11-19 11:04:53.461392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.305 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:14.878 11:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:37:15.450 11:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:37:15.450 11:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:37:15.450 11:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:15.450 11:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:15.450 11:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:37:17.368 11:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:17.368 11:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:17.368 11:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:17.368 11:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:17.368 11:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:17.368 11:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:37:17.368 11:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:17.368 [global] 00:37:17.368 thread=1 00:37:17.368 invalidate=1 
00:37:17.368 rw=write 00:37:17.368 time_based=1 00:37:17.368 runtime=1 00:37:17.368 ioengine=libaio 00:37:17.368 direct=1 00:37:17.368 bs=4096 00:37:17.368 iodepth=1 00:37:17.368 norandommap=0 00:37:17.368 numjobs=1 00:37:17.368 00:37:17.368 verify_dump=1 00:37:17.368 verify_backlog=512 00:37:17.368 verify_state_save=0 00:37:17.368 do_verify=1 00:37:17.368 verify=crc32c-intel 00:37:17.368 [job0] 00:37:17.368 filename=/dev/nvme0n1 00:37:17.368 Could not set queue depth (nvme0n1) 00:37:17.628 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:17.628 fio-3.35 00:37:17.628 Starting 1 thread 00:37:19.011 00:37:19.012 job0: (groupid=0, jobs=1): err= 0: pid=1291023: Tue Nov 19 11:04:57 2024 00:37:19.012 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:37:19.012 slat (nsec): min=25329, max=25953, avg=25704.44, stdev=144.88 00:37:19.012 clat (usec): min=1050, max=42010, avg=39512.32, stdev=9606.10 00:37:19.012 lat (usec): min=1075, max=42036, avg=39538.03, stdev=9606.13 00:37:19.012 clat percentiles (usec): 00:37:19.012 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 20.00th=[41157], 00:37:19.012 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:37:19.012 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:19.012 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:19.012 | 99.99th=[42206] 00:37:19.012 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:37:19.012 slat (usec): min=9, max=28538, avg=86.55, stdev=1259.91 00:37:19.012 clat (usec): min=193, max=821, avg=546.56, stdev=108.50 00:37:19.012 lat (usec): min=204, max=29314, avg=633.10, stdev=1274.92 00:37:19.012 clat percentiles (usec): 00:37:19.012 | 1.00th=[ 306], 5.00th=[ 351], 10.00th=[ 408], 20.00th=[ 474], 00:37:19.012 | 30.00th=[ 490], 40.00th=[ 510], 50.00th=[ 537], 60.00th=[ 578], 00:37:19.012 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 693], 95.00th=[ 734], 00:37:19.012 | 99.00th=[ 766], 99.50th=[ 775], 99.90th=[ 824], 99.95th=[ 824], 00:37:19.012 | 99.99th=[ 824] 00:37:19.012 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:37:19.012 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:19.012 lat (usec) : 250=0.38%, 500=33.21%, 750=60.75%, 1000=2.26% 00:37:19.012 lat (msec) : 2=0.19%, 50=3.21% 00:37:19.012 cpu : usr=0.67%, sys=1.54%, ctx=533, majf=0, minf=1 00:37:19.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:19.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.012 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:19.012 00:37:19.012 Run status group 0 (all jobs): 00:37:19.012 READ: bw=69.2KiB/s (70.9kB/s), 69.2KiB/s-69.2KiB/s (70.9kB/s-70.9kB/s), io=72.0KiB (73.7kB), run=1040-1040msec 00:37:19.012 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:37:19.012 00:37:19.012 Disk stats (read/write): 00:37:19.012 nvme0n1: ios=39/512, merge=0/0, ticks=1500/265, in_queue=1765, util=98.80% 00:37:19.012 11:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:19.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 
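(Annotation: for readers reproducing this step outside the harness, the fio-wrapper call above, -p nvmf -i 4096 -d 1 -t write -r 1 -v, expands to essentially the job file echoed in the log. A minimal standalone equivalent is sketched below; the /tmp path is an arbitrary choice, and /dev/nvme0n1 assumes the connected namespace enumerated exactly as in this run.)

cat > /tmp/nmic-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write            ; -t write
time_based=1
runtime=1           ; -r 1 (seconds)
ioengine=libaio
direct=1
bs=4096             ; -i 4096
iodepth=1           ; -d 1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1         ; -v
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-write.fio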
00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:19.012 rmmod nvme_tcp 00:37:19.012 rmmod nvme_fabrics 00:37:19.012 rmmod nvme_keyring 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1289904 ']' 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1289904 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1289904 ']' 00:37:19.012 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1289904 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1289904 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1289904' 00:37:19.273 killing process with pid 1289904 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1289904 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1289904 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:19.273 11:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:21.819 00:37:21.819 real 0m15.816s 00:37:21.819 user 0m36.384s 00:37:21.819 sys 0m7.418s 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:21.819 ************************************ 00:37:21.819 END TEST nvmf_nmic 00:37:21.819 ************************************ 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:21.819 ************************************ 00:37:21.819 START TEST nvmf_fio_target 00:37:21.819 ************************************ 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:37:21.819 * Looking for test storage... 
00:37:21.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:21.819 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:21.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.820 --rc genhtml_branch_coverage=1 00:37:21.820 --rc genhtml_function_coverage=1 00:37:21.820 --rc genhtml_legend=1 00:37:21.820 --rc geninfo_all_blocks=1 00:37:21.820 --rc geninfo_unexecuted_blocks=1 00:37:21.820 00:37:21.820 ' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:21.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.820 --rc genhtml_branch_coverage=1 00:37:21.820 --rc genhtml_function_coverage=1 00:37:21.820 --rc genhtml_legend=1 00:37:21.820 --rc geninfo_all_blocks=1 00:37:21.820 --rc geninfo_unexecuted_blocks=1 00:37:21.820 00:37:21.820 ' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:21.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.820 --rc genhtml_branch_coverage=1 00:37:21.820 --rc genhtml_function_coverage=1 00:37:21.820 --rc genhtml_legend=1 00:37:21.820 --rc geninfo_all_blocks=1 00:37:21.820 --rc geninfo_unexecuted_blocks=1 00:37:21.820 00:37:21.820 ' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:21.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.820 --rc genhtml_branch_coverage=1 00:37:21.820 --rc genhtml_function_coverage=1 00:37:21.820 --rc genhtml_legend=1 00:37:21.820 --rc geninfo_all_blocks=1 00:37:21.820 --rc geninfo_unexecuted_blocks=1 00:37:21.820 
00:37:21.820 ' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.820 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:21.821 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:21.821 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:37:21.821 11:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:29.960 11:05:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:29.960 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:29.961 11:05:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:29.961 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:29.961 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:29.961 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:29.961 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:29.961 11:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:29.961 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:29.961 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:29.961 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:29.961 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:29.961 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:29.961 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:29.961 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:29.961 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:29.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:29.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:37:29.961 00:37:29.961 --- 10.0.0.2 ping statistics --- 00:37:29.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.961 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:37:29.961 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:29.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:29.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:37:29.961 00:37:29.961 --- 10.0.0.1 ping statistics --- 00:37:29.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.962 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1295437 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1295437 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1295437 ']' 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
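(Annotation: the nvmf_tcp_init sequence logged above is plain iproute2 namespace plumbing. Condensed into a sketch, assuming the two e810 ports enumerated as cvl_0_0 and cvl_0_1 exactly as in this run:)

# target side gets its own network namespace; the initiator stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port toward the initiator NIC, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1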
00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:29.962 11:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:29.962 [2024-11-19 11:05:08.290788] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:29.962 [2024-11-19 11:05:08.291896] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:37:29.962 [2024-11-19 11:05:08.291947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:29.962 [2024-11-19 11:05:08.391053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:29.962 [2024-11-19 11:05:08.443486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:29.962 [2024-11-19 11:05:08.443538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:29.962 [2024-11-19 11:05:08.443546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:29.962 [2024-11-19 11:05:08.443554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:29.962 [2024-11-19 11:05:08.443560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:29.962 [2024-11-19 11:05:08.445583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.962 [2024-11-19 11:05:08.445749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:29.962 [2024-11-19 11:05:08.445911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:29.962 [2024-11-19 11:05:08.445912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.962 [2024-11-19 11:05:08.522710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:29.962 [2024-11-19 11:05:08.523808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:29.962 [2024-11-19 11:05:08.523934] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:29.962 [2024-11-19 11:05:08.524362] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:29.962 [2024-11-19 11:05:08.524412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
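(Annotation: with the app now running in interrupt mode, the fio_target configuration that follows is an ordinary JSON-RPC sequence. A condensed sketch, assuming repo-relative paths for nvmf_tgt and rpc.py in place of the absolute workspace paths shown in the log; the log repeats bdev_malloc_create for several bdevs and builds raid0/concat0 on top of them, which is elided here:)

# start the target inside the target namespace, in interrupt mode on 4 cores
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
# once the app listens on /var/tmp/spdk.sock, configure transport, bdev, subsystem, listener
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: connect and wait for the namespaces to enumerate
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420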
00:37:29.962 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:29.962 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:37:29.962 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:29.962 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:29.962 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:29.962 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:29.962 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:30.231 [2024-11-19 11:05:09.310938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:30.231 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:30.500 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:37:30.500 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:30.761 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:37:30.761 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:31.022 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:37:31.022 11:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:31.022 11:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:37:31.022 11:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:37:31.282 11:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:31.544 11:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:37:31.544 11:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:31.804 11:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:37:31.804 11:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:31.804 11:05:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:37:31.804 11:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:37:32.065 11:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:32.326 11:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:32.326 11:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:32.587 11:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:32.587 11:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:32.587 11:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:32.847 [2024-11-19 11:05:11.910853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:32.847 11:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:37:33.107 11:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:37:33.367 11:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:33.628 11:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:37:33.628 11:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:37:33.628 11:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:33.628 11:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:37:33.628 11:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:37:33.628 11:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:37:36.174 11:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:36.174 11:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:37:36.174 11:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:36.174 11:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:37:36.174 11:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:36.174 11:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:37:36.174 11:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:36.174 [global] 00:37:36.174 thread=1 00:37:36.174 invalidate=1 00:37:36.174 rw=write 00:37:36.174 time_based=1 00:37:36.174 runtime=1 00:37:36.174 ioengine=libaio 00:37:36.174 direct=1 00:37:36.174 bs=4096 00:37:36.174 iodepth=1 00:37:36.174 norandommap=0 00:37:36.174 numjobs=1 00:37:36.174 00:37:36.174 verify_dump=1 00:37:36.174 verify_backlog=512 00:37:36.174 verify_state_save=0 00:37:36.174 do_verify=1 00:37:36.174 verify=crc32c-intel 00:37:36.174 [job0] 00:37:36.174 filename=/dev/nvme0n1 00:37:36.174 [job1] 00:37:36.174 filename=/dev/nvme0n2 00:37:36.174 [job2] 00:37:36.174 filename=/dev/nvme0n3 00:37:36.174 [job3] 00:37:36.174 filename=/dev/nvme0n4 00:37:36.174 Could not set queue depth (nvme0n1) 00:37:36.174 Could not set queue depth (nvme0n2) 00:37:36.174 Could not set queue depth (nvme0n3) 00:37:36.174 Could not set queue depth (nvme0n4) 00:37:36.174 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:36.174 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:36.174 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:36.174 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:36.174 fio-3.35 00:37:36.174 Starting 4 threads 00:37:37.575 00:37:37.575 job0: (groupid=0, jobs=1): err= 0: pid=1297012: Tue Nov 19 11:05:16 2024 00:37:37.575 read: IOPS=18, BW=73.6KiB/s (75.4kB/s)(76.0KiB/1032msec) 00:37:37.575 slat (nsec): min=26818, max=27550, avg=27100.11, stdev=209.32 00:37:37.575 clat (usec): min=40837, max=41959, avg=41118.00, stdev=366.07 00:37:37.575 lat (usec): min=40864, max=41987, avg=41145.10, stdev=366.07 00:37:37.575 clat percentiles (usec): 00:37:37.575 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:37.575 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:37.575 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:37:37.575 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:37.575 | 99.99th=[42206] 00:37:37.575 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:37:37.575 slat (nsec): min=9798, max=53756, avg=30449.38, stdev=10201.04 00:37:37.575 clat (usec): min=200, max=707, avg=450.20, stdev=84.23 00:37:37.575 lat (usec): min=235, max=743, avg=480.65, stdev=88.39 00:37:37.575 clat percentiles (usec): 00:37:37.575 | 1.00th=[ 269], 5.00th=[ 306], 10.00th=[ 338], 20.00th=[ 375], 00:37:37.575 | 30.00th=[ 412], 40.00th=[ 437], 50.00th=[ 457], 60.00th=[ 478], 00:37:37.575 | 70.00th=[ 494], 80.00th=[ 519], 90.00th=[ 553], 95.00th=[ 578], 00:37:37.575 
| 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 709], 99.95th=[ 709], 00:37:37.575 | 99.99th=[ 709] 00:37:37.575 bw ( KiB/s): min= 4096, max= 4096, per=42.96%, avg=4096.00, stdev= 0.00, samples=1 00:37:37.575 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:37.575 lat (usec) : 250=0.19%, 500=71.37%, 750=24.86% 00:37:37.575 lat (msec) : 50=3.58% 00:37:37.575 cpu : usr=0.48%, sys=1.75%, ctx=533, majf=0, minf=1 00:37:37.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.575 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:37.575 job1: (groupid=0, jobs=1): err= 0: pid=1297022: Tue Nov 19 11:05:16 2024 00:37:37.575 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:37:37.575 slat (nsec): min=8204, max=54568, avg=26748.97, stdev=2767.07 00:37:37.575 clat (usec): min=671, max=1323, avg=1060.24, stdev=99.23 00:37:37.575 lat (usec): min=698, max=1349, avg=1086.99, stdev=99.06 00:37:37.575 clat percentiles (usec): 00:37:37.575 | 1.00th=[ 783], 5.00th=[ 881], 10.00th=[ 938], 20.00th=[ 988], 00:37:37.575 | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1090], 00:37:37.575 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:37:37.575 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1319], 99.95th=[ 1319], 00:37:37.575 | 99.99th=[ 1319] 00:37:37.575 write: IOPS=717, BW=2869KiB/s (2938kB/s)(2872KiB/1001msec); 0 zone resets 00:37:37.575 slat (nsec): min=9824, max=57593, avg=27747.30, stdev=11061.86 00:37:37.575 clat (usec): min=125, max=1021, avg=576.83, stdev=164.60 00:37:37.575 lat (usec): min=136, max=1055, avg=604.57, stdev=168.33 00:37:37.575 clat percentiles (usec): 00:37:37.576 | 1.00th=[ 247], 5.00th=[ 306], 10.00th=[ 351], 20.00th=[ 429], 00:37:37.576 | 30.00th=[ 482], 40.00th=[ 529], 50.00th=[ 578], 60.00th=[ 627], 00:37:37.576 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 865], 00:37:37.576 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 1020], 99.95th=[ 1020], 00:37:37.576 | 99.99th=[ 1020] 00:37:37.576 bw ( KiB/s): min= 4096, max= 4096, per=42.96%, avg=4096.00, stdev= 0.00, samples=1 00:37:37.576 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:37.576 lat (usec) : 250=0.73%, 500=18.94%, 750=30.81%, 1000=17.97% 00:37:37.576 lat (msec) : 2=31.54% 00:37:37.576 cpu : usr=2.10%, sys=3.20%, ctx=1231, majf=0, minf=1 00:37:37.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.576 issued rwts: total=512,718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:37.576 job2: (groupid=0, jobs=1): err= 0: pid=1297023: Tue Nov 19 11:05:16 2024 00:37:37.576 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:37:37.576 slat (nsec): min=7210, max=63575, avg=28125.77, stdev=3137.07 00:37:37.576 clat (usec): min=582, max=1470, avg=1041.73, stdev=117.90 00:37:37.576 lat (usec): min=610, max=1498, avg=1069.86, stdev=117.80 00:37:37.576 clat percentiles (usec): 00:37:37.576 | 1.00th=[ 685], 5.00th=[ 807], 10.00th=[ 881], 20.00th=[ 971], 00:37:37.576 | 30.00th=[ 
1004], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:37:37.576 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1221], 00:37:37.576 | 99.00th=[ 1287], 99.50th=[ 1352], 99.90th=[ 1467], 99.95th=[ 1467], 00:37:37.576 | 99.99th=[ 1467] 00:37:37.576 write: IOPS=717, BW=2869KiB/s (2938kB/s)(2872KiB/1001msec); 0 zone resets 00:37:37.576 slat (nsec): min=9653, max=71670, avg=31613.87, stdev=10523.02 00:37:37.576 clat (usec): min=136, max=967, avg=586.02, stdev=150.55 00:37:37.576 lat (usec): min=146, max=1003, avg=617.64, stdev=155.23 00:37:37.576 clat percentiles (usec): 00:37:37.576 | 1.00th=[ 235], 5.00th=[ 293], 10.00th=[ 363], 20.00th=[ 478], 00:37:37.576 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 635], 00:37:37.576 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 807], 00:37:37.576 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 971], 99.95th=[ 971], 00:37:37.576 | 99.99th=[ 971] 00:37:37.576 bw ( KiB/s): min= 4096, max= 4096, per=42.96%, avg=4096.00, stdev= 0.00, samples=1 00:37:37.576 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:37.576 lat (usec) : 250=0.89%, 500=13.41%, 750=37.56%, 1000=18.21% 00:37:37.576 lat (msec) : 2=29.92% 00:37:37.576 cpu : usr=3.30%, sys=4.20%, ctx=1231, majf=0, minf=1 00:37:37.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.576 issued rwts: total=512,718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:37.576 job3: (groupid=0, jobs=1): err= 0: pid=1297024: Tue Nov 19 11:05:16 2024 00:37:37.576 read: IOPS=508, BW=2034KiB/s (2083kB/s)(2036KiB/1001msec) 00:37:37.576 slat (nsec): min=8189, max=45636, avg=26253.41, stdev=3168.38 00:37:37.576 clat (usec): min=764, max=41848, avg=1264.78, stdev=1807.41 00:37:37.576 lat (usec): min=790, max=41874, avg=1291.03, stdev=1807.40 00:37:37.576 clat percentiles (usec): 00:37:37.576 | 1.00th=[ 865], 5.00th=[ 963], 10.00th=[ 1020], 20.00th=[ 1074], 00:37:37.576 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1221], 00:37:37.576 | 70.00th=[ 1254], 80.00th=[ 1303], 90.00th=[ 1352], 95.00th=[ 1418], 00:37:37.576 | 99.00th=[ 1516], 99.50th=[ 1582], 99.90th=[41681], 99.95th=[41681], 00:37:37.576 | 99.99th=[41681] 00:37:37.576 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:37:37.576 slat (nsec): min=9994, max=52082, avg=32039.14, stdev=8180.00 00:37:37.576 clat (usec): min=215, max=1042, avg=620.82, stdev=140.96 00:37:37.576 lat (usec): min=249, max=1076, avg=652.86, stdev=143.81 00:37:37.576 clat percentiles (usec): 00:37:37.576 | 1.00th=[ 265], 5.00th=[ 371], 10.00th=[ 429], 20.00th=[ 502], 00:37:37.576 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:37:37.576 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 799], 95.00th=[ 840], 00:37:37.576 | 99.00th=[ 922], 99.50th=[ 988], 99.90th=[ 1045], 99.95th=[ 1045], 00:37:37.576 | 99.99th=[ 1045] 00:37:37.576 bw ( KiB/s): min= 4096, max= 4096, per=42.96%, avg=4096.00, stdev= 0.00, samples=1 00:37:37.576 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:37.576 lat (usec) : 250=0.20%, 500=9.40%, 750=32.03%, 1000=12.44% 00:37:37.576 lat (msec) : 2=45.84%, 50=0.10% 00:37:37.576 cpu : usr=1.50%, sys=3.10%, ctx=1022, majf=0, minf=1 00:37:37.576 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.576 issued rwts: total=509,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:37.576 00:37:37.576 Run status group 0 (all jobs): 00:37:37.576 READ: bw=6016KiB/s (6160kB/s), 73.6KiB/s-2046KiB/s (75.4kB/s-2095kB/s), io=6208KiB (6357kB), run=1001-1032msec 00:37:37.576 WRITE: bw=9535KiB/s (9764kB/s), 1984KiB/s-2869KiB/s (2032kB/s-2938kB/s), io=9840KiB (10.1MB), run=1001-1032msec 00:37:37.576 00:37:37.576 Disk stats (read/write): 00:37:37.576 nvme0n1: ios=39/512, merge=0/0, ticks=1544/232, in_queue=1776, util=96.59% 00:37:37.576 nvme0n2: ios=521/512, merge=0/0, ticks=681/292, in_queue=973, util=100.00% 00:37:37.576 nvme0n3: ios=534/512, merge=0/0, ticks=1092/231, in_queue=1323, util=96.83% 00:37:37.576 nvme0n4: ios=393/512, merge=0/0, ticks=1396/314, in_queue=1710, util=97.11% 00:37:37.576 11:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:37:37.576 [global] 00:37:37.576 thread=1 00:37:37.576 invalidate=1 00:37:37.576 rw=randwrite 00:37:37.576 time_based=1 00:37:37.576 runtime=1 00:37:37.576 ioengine=libaio 00:37:37.576 direct=1 00:37:37.576 bs=4096 00:37:37.576 iodepth=1 00:37:37.576 norandommap=0 00:37:37.576 numjobs=1 00:37:37.576 00:37:37.576 verify_dump=1 00:37:37.576 verify_backlog=512 00:37:37.576 verify_state_save=0 00:37:37.576 do_verify=1 00:37:37.576 verify=crc32c-intel 00:37:37.576 [job0] 00:37:37.576 filename=/dev/nvme0n1 00:37:37.576 [job1] 00:37:37.576 filename=/dev/nvme0n2 00:37:37.576 [job2] 00:37:37.576 filename=/dev/nvme0n3 00:37:37.576 [job3] 00:37:37.576 filename=/dev/nvme0n4 00:37:37.576 Could not set queue depth (nvme0n1) 00:37:37.576 Could not set queue depth (nvme0n2) 00:37:37.576 Could not set queue depth (nvme0n3) 00:37:37.576 Could not set queue depth (nvme0n4) 00:37:37.844 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:37.844 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:37.844 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:37.844 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:37.844 fio-3.35 00:37:37.844 Starting 4 threads 00:37:39.231 00:37:39.231 job0: (groupid=0, jobs=1): err= 0: pid=1297468: Tue Nov 19 11:05:18 2024 00:37:39.231 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:37:39.231 slat (nsec): min=25429, max=61125, avg=26351.50, stdev=2978.89 00:37:39.231 clat (usec): min=758, max=2303, avg=1018.47, stdev=95.11 00:37:39.231 lat (usec): min=784, max=2329, avg=1044.82, stdev=94.92 00:37:39.231 clat percentiles (usec): 00:37:39.231 | 1.00th=[ 807], 5.00th=[ 881], 10.00th=[ 914], 20.00th=[ 963], 00:37:39.231 | 30.00th=[ 988], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1037], 00:37:39.231 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:37:39.231 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 2311], 99.95th=[ 2311], 00:37:39.231 | 99.99th=[ 2311] 00:37:39.231 write: IOPS=698, BW=2793KiB/s (2860kB/s)(2796KiB/1001msec); 0 zone 
resets 00:37:39.231 slat (usec): min=8, max=108, avg=28.67, stdev= 9.73 00:37:39.231 clat (usec): min=227, max=890, avg=623.42, stdev=112.09 00:37:39.231 lat (usec): min=236, max=922, avg=652.09, stdev=116.71 00:37:39.231 clat percentiles (usec): 00:37:39.231 | 1.00th=[ 351], 5.00th=[ 408], 10.00th=[ 474], 20.00th=[ 537], 00:37:39.231 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:37:39.231 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:37:39.231 | 99.00th=[ 848], 99.50th=[ 857], 99.90th=[ 889], 99.95th=[ 889], 00:37:39.231 | 99.99th=[ 889] 00:37:39.231 bw ( KiB/s): min= 4096, max= 4096, per=33.41%, avg=4096.00, stdev= 0.00, samples=1 00:37:39.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:39.231 lat (usec) : 250=0.08%, 500=8.51%, 750=42.11%, 1000=22.13% 00:37:39.231 lat (msec) : 2=27.09%, 4=0.08% 00:37:39.231 cpu : usr=2.00%, sys=5.00%, ctx=1212, majf=0, minf=1 00:37:39.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.231 issued rwts: total=512,699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:39.231 job1: (groupid=0, jobs=1): err= 0: pid=1297489: Tue Nov 19 11:05:18 2024 00:37:39.231 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:37:39.231 slat (nsec): min=6320, max=45501, avg=23528.84, stdev=7455.69 00:37:39.231 clat (usec): min=300, max=1319, avg=823.65, stdev=174.01 00:37:39.231 lat (usec): min=307, max=1345, avg=847.18, stdev=175.86 00:37:39.231 clat percentiles (usec): 00:37:39.231 | 1.00th=[ 433], 5.00th=[ 545], 10.00th=[ 619], 20.00th=[ 668], 00:37:39.231 | 30.00th=[ 734], 40.00th=[ 783], 50.00th=[ 824], 60.00th=[ 857], 00:37:39.231 | 70.00th=[ 898], 80.00th=[ 955], 90.00th=[ 1090], 95.00th=[ 1123], 00:37:39.231 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1319], 99.95th=[ 1319], 00:37:39.231 | 99.99th=[ 1319] 00:37:39.231 write: IOPS=872, BW=3489KiB/s (3572kB/s)(3492KiB/1001msec); 0 zone resets 00:37:39.231 slat (nsec): min=8555, max=66104, avg=26694.33, stdev=9567.06 00:37:39.231 clat (usec): min=197, max=983, avg=610.82, stdev=140.74 00:37:39.231 lat (usec): min=222, max=1015, avg=637.52, stdev=145.42 00:37:39.231 clat percentiles (usec): 00:37:39.231 | 1.00th=[ 235], 5.00th=[ 355], 10.00th=[ 408], 20.00th=[ 490], 00:37:39.231 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 660], 00:37:39.231 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 807], 00:37:39.231 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 988], 99.95th=[ 988], 00:37:39.231 | 99.99th=[ 988] 00:37:39.231 bw ( KiB/s): min= 4096, max= 4096, per=33.41%, avg=4096.00, stdev= 0.00, samples=1 00:37:39.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:39.231 lat (usec) : 250=0.94%, 500=13.57%, 750=51.48%, 1000=27.80% 00:37:39.231 lat (msec) : 2=6.21% 00:37:39.231 cpu : usr=2.80%, sys=3.90%, ctx=1385, majf=0, minf=2 00:37:39.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.231 issued rwts: total=512,873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.231 latency : target=0, window=0, percentile=100.00%, depth=1 
00:37:39.231 job2: (groupid=0, jobs=1): err= 0: pid=1297516: Tue Nov 19 11:05:18 2024 00:37:39.231 read: IOPS=665, BW=2661KiB/s (2725kB/s)(2664KiB/1001msec) 00:37:39.231 slat (nsec): min=6360, max=59254, avg=23905.21, stdev=8097.23 00:37:39.231 clat (usec): min=235, max=2345, avg=686.17, stdev=135.22 00:37:39.231 lat (usec): min=243, max=2372, avg=710.08, stdev=137.43 00:37:39.231 clat percentiles (usec): 00:37:39.231 | 1.00th=[ 445], 5.00th=[ 494], 10.00th=[ 529], 20.00th=[ 570], 00:37:39.231 | 30.00th=[ 603], 40.00th=[ 652], 50.00th=[ 693], 60.00th=[ 725], 00:37:39.231 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 857], 00:37:39.231 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 2343], 99.95th=[ 2343], 00:37:39.231 | 99.99th=[ 2343] 00:37:39.231 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:37:39.231 slat (nsec): min=8819, max=65724, avg=30431.19, stdev=8237.34 00:37:39.231 clat (usec): min=139, max=815, avg=471.96, stdev=126.18 00:37:39.231 lat (usec): min=172, max=847, avg=502.39, stdev=128.12 00:37:39.231 clat percentiles (usec): 00:37:39.231 | 1.00th=[ 229], 5.00th=[ 269], 10.00th=[ 302], 20.00th=[ 359], 00:37:39.231 | 30.00th=[ 388], 40.00th=[ 420], 50.00th=[ 478], 60.00th=[ 510], 00:37:39.231 | 70.00th=[ 545], 80.00th=[ 594], 90.00th=[ 644], 95.00th=[ 668], 00:37:39.231 | 99.00th=[ 725], 99.50th=[ 766], 99.90th=[ 799], 99.95th=[ 816], 00:37:39.231 | 99.99th=[ 816] 00:37:39.231 bw ( KiB/s): min= 4096, max= 4096, per=33.41%, avg=4096.00, stdev= 0.00, samples=1 00:37:39.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:39.231 lat (usec) : 250=1.60%, 500=35.21%, 750=49.17%, 1000=13.96% 00:37:39.231 lat (msec) : 4=0.06% 00:37:39.231 cpu : usr=2.80%, sys=6.80%, ctx=1690, majf=0, minf=1 00:37:39.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.231 issued rwts: total=666,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:39.231 job3: (groupid=0, jobs=1): err= 0: pid=1297527: Tue Nov 19 11:05:18 2024 00:37:39.231 read: IOPS=15, BW=63.1KiB/s (64.6kB/s)(64.0KiB/1014msec) 00:37:39.231 slat (nsec): min=27290, max=32389, avg=28095.19, stdev=1183.43 00:37:39.231 clat (usec): min=41528, max=42154, avg=41929.34, stdev=133.14 00:37:39.231 lat (usec): min=41556, max=42182, avg=41957.44, stdev=133.24 00:37:39.231 clat percentiles (usec): 00:37:39.231 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:37:39.231 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:37:39.231 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:39.231 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:39.231 | 99.99th=[42206] 00:37:39.231 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:37:39.231 slat (nsec): min=9340, max=57742, avg=31320.84, stdev=9779.78 00:37:39.231 clat (usec): min=164, max=982, avg=629.32, stdev=119.84 00:37:39.231 lat (usec): min=175, max=1016, avg=660.64, stdev=123.75 00:37:39.231 clat percentiles (usec): 00:37:39.231 | 1.00th=[ 351], 5.00th=[ 412], 10.00th=[ 474], 20.00th=[ 529], 00:37:39.231 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 668], 00:37:39.231 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 807], 
00:37:39.231 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 979], 99.95th=[ 979], 00:37:39.231 | 99.99th=[ 979] 00:37:39.231 bw ( KiB/s): min= 4096, max= 4096, per=33.41%, avg=4096.00, stdev= 0.00, samples=1 00:37:39.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:39.231 lat (usec) : 250=0.19%, 500=14.20%, 750=68.37%, 1000=14.20% 00:37:39.231 lat (msec) : 50=3.03% 00:37:39.231 cpu : usr=1.09%, sys=1.97%, ctx=532, majf=0, minf=1 00:37:39.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.232 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:39.232 00:37:39.232 Run status group 0 (all jobs): 00:37:39.232 READ: bw=6730KiB/s (6891kB/s), 63.1KiB/s-2661KiB/s (64.6kB/s-2725kB/s), io=6824KiB (6988kB), run=1001-1014msec 00:37:39.232 WRITE: bw=12.0MiB/s (12.6MB/s), 2020KiB/s-4092KiB/s (2068kB/s-4190kB/s), io=12.1MiB (12.7MB), run=1001-1014msec 00:37:39.232 00:37:39.232 Disk stats (read/write): 00:37:39.232 nvme0n1: ios=475/512, merge=0/0, ticks=458/254, in_queue=712, util=82.97% 00:37:39.232 nvme0n2: ios=540/525, merge=0/0, ticks=435/322, in_queue=757, util=83.26% 00:37:39.232 nvme0n3: ios=518/850, merge=0/0, ticks=310/296, in_queue=606, util=86.94% 00:37:39.232 nvme0n4: ios=44/512, merge=0/0, ticks=1540/255, in_queue=1795, util=97.19% 00:37:39.232 11:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:37:39.232 [global] 00:37:39.232 thread=1 00:37:39.232 invalidate=1 00:37:39.232 rw=write 00:37:39.232 time_based=1 00:37:39.232 runtime=1 00:37:39.232 ioengine=libaio 00:37:39.232 direct=1 00:37:39.232 bs=4096 00:37:39.232 iodepth=128 00:37:39.232 norandommap=0 00:37:39.232 numjobs=1 00:37:39.232 00:37:39.232 verify_dump=1 00:37:39.232 verify_backlog=512 00:37:39.232 verify_state_save=0 00:37:39.232 do_verify=1 00:37:39.232 verify=crc32c-intel 00:37:39.232 [job0] 00:37:39.232 filename=/dev/nvme0n1 00:37:39.232 [job1] 00:37:39.232 filename=/dev/nvme0n2 00:37:39.232 [job2] 00:37:39.232 filename=/dev/nvme0n3 00:37:39.232 [job3] 00:37:39.232 filename=/dev/nvme0n4 00:37:39.232 Could not set queue depth (nvme0n1) 00:37:39.232 Could not set queue depth (nvme0n2) 00:37:39.232 Could not set queue depth (nvme0n3) 00:37:39.232 Could not set queue depth (nvme0n4) 00:37:39.493 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:39.493 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:39.493 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:39.493 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:39.493 fio-3.35 00:37:39.493 Starting 4 threads 00:37:40.879 00:37:40.879 job0: (groupid=0, jobs=1): err= 0: pid=1297943: Tue Nov 19 11:05:19 2024 00:37:40.879 read: IOPS=7083, BW=27.7MiB/s (29.0MB/s)(28.0MiB/1012msec) 00:37:40.879 slat (nsec): min=891, max=11985k, avg=68708.37, stdev=564196.67 00:37:40.879 clat (usec): min=2490, max=27908, avg=9306.92, stdev=3941.94 00:37:40.879 lat (usec): min=2500, max=27932, 
avg=9375.63, stdev=3988.47 00:37:40.879 clat percentiles (usec): 00:37:40.879 | 1.00th=[ 3064], 5.00th=[ 4752], 10.00th=[ 5669], 20.00th=[ 6325], 00:37:40.879 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8586], 00:37:40.879 | 70.00th=[10028], 80.00th=[12125], 90.00th=[15139], 95.00th=[18220], 00:37:40.879 | 99.00th=[20579], 99.50th=[22676], 99.90th=[24773], 99.95th=[24773], 00:37:40.879 | 99.99th=[27919] 00:37:40.879 write: IOPS=7223, BW=28.2MiB/s (29.6MB/s)(28.6MiB/1012msec); 0 zone resets 00:37:40.879 slat (nsec): min=1532, max=11302k, avg=63935.30, stdev=485652.27 00:37:40.879 clat (usec): min=828, max=26481, avg=8430.50, stdev=4086.62 00:37:40.879 lat (usec): min=1037, max=26489, avg=8494.44, stdev=4114.63 00:37:40.879 clat percentiles (usec): 00:37:40.879 | 1.00th=[ 3032], 5.00th=[ 3884], 10.00th=[ 4817], 20.00th=[ 5342], 00:37:40.879 | 30.00th=[ 5997], 40.00th=[ 6718], 50.00th=[ 7439], 60.00th=[ 8225], 00:37:40.879 | 70.00th=[ 8848], 80.00th=[10945], 90.00th=[14353], 95.00th=[17695], 00:37:40.879 | 99.00th=[21365], 99.50th=[22414], 99.90th=[25822], 99.95th=[25822], 00:37:40.879 | 99.99th=[26608] 00:37:40.880 bw ( KiB/s): min=24688, max=32768, per=30.06%, avg=28728.00, stdev=5713.42, samples=2 00:37:40.880 iops : min= 6172, max= 8192, avg=7182.00, stdev=1428.36, samples=2 00:37:40.880 lat (usec) : 1000=0.01% 00:37:40.880 lat (msec) : 2=0.13%, 4=3.52%, 10=69.94%, 20=23.80%, 50=2.60% 00:37:40.880 cpu : usr=3.76%, sys=6.33%, ctx=400, majf=0, minf=2 00:37:40.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:37:40.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:40.880 issued rwts: total=7168,7310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:40.880 job1: (groupid=0, jobs=1): err= 0: pid=1297958: Tue Nov 19 11:05:19 2024 00:37:40.880 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:37:40.880 slat (nsec): min=874, max=28403k, avg=72865.54, stdev=768556.80 00:37:40.880 clat (usec): min=1445, max=80127, avg=9964.09, stdev=10847.36 00:37:40.880 lat (usec): min=1484, max=80133, avg=10036.96, stdev=10916.41 00:37:40.880 clat percentiles (usec): 00:37:40.880 | 1.00th=[ 2245], 5.00th=[ 4047], 10.00th=[ 4948], 20.00th=[ 5669], 00:37:40.880 | 30.00th=[ 6259], 40.00th=[ 6783], 50.00th=[ 7308], 60.00th=[ 8029], 00:37:40.880 | 70.00th=[ 8586], 80.00th=[ 9896], 90.00th=[13566], 95.00th=[21890], 00:37:40.880 | 99.00th=[66847], 99.50th=[70779], 99.90th=[80217], 99.95th=[80217], 00:37:40.880 | 99.99th=[80217] 00:37:40.880 write: IOPS=6985, BW=27.3MiB/s (28.6MB/s)(27.3MiB/1002msec); 0 zone resets 00:37:40.880 slat (nsec): min=1546, max=15161k, avg=59571.68, stdev=482691.24 00:37:40.880 clat (usec): min=1302, max=58506, avg=8703.99, stdev=7065.75 00:37:40.880 lat (usec): min=1311, max=58509, avg=8763.56, stdev=7091.69 00:37:40.880 clat percentiles (usec): 00:37:40.880 | 1.00th=[ 1844], 5.00th=[ 3392], 10.00th=[ 4178], 20.00th=[ 4883], 00:37:40.880 | 30.00th=[ 5276], 40.00th=[ 5932], 50.00th=[ 6718], 60.00th=[ 7373], 00:37:40.880 | 70.00th=[ 8094], 80.00th=[10028], 90.00th=[15139], 95.00th=[27132], 00:37:40.880 | 99.00th=[34341], 99.50th=[44827], 99.90th=[45876], 99.95th=[58459], 00:37:40.880 | 99.99th=[58459] 00:37:40.880 bw ( KiB/s): min=19760, max=35216, per=28.76%, avg=27488.00, stdev=10929.04, samples=2 00:37:40.880 iops : min= 4940, max= 8804, avg=6872.00, 
stdev=2732.26, samples=2 00:37:40.880 lat (msec) : 2=0.96%, 4=5.98%, 10=73.31%, 20=13.53%, 50=4.60% 00:37:40.880 lat (msec) : 100=1.62% 00:37:40.880 cpu : usr=4.60%, sys=7.29%, ctx=437, majf=0, minf=1 00:37:40.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:37:40.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:40.880 issued rwts: total=6656,6999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:40.880 job2: (groupid=0, jobs=1): err= 0: pid=1297975: Tue Nov 19 11:05:19 2024 00:37:40.880 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:37:40.880 slat (nsec): min=960, max=16738k, avg=86292.64, stdev=777587.49 00:37:40.880 clat (usec): min=2879, max=37140, avg=11478.90, stdev=4481.77 00:37:40.880 lat (usec): min=2893, max=37147, avg=11565.19, stdev=4534.93 00:37:40.880 clat percentiles (usec): 00:37:40.880 | 1.00th=[ 4359], 5.00th=[ 6652], 10.00th=[ 7111], 20.00th=[ 7963], 00:37:40.880 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[11207], 00:37:40.880 | 70.00th=[13566], 80.00th=[15533], 90.00th=[17433], 95.00th=[18744], 00:37:40.880 | 99.00th=[26346], 99.50th=[27132], 99.90th=[36963], 99.95th=[36963], 00:37:40.880 | 99.99th=[36963] 00:37:40.880 write: IOPS=5346, BW=20.9MiB/s (21.9MB/s)(21.1MiB/1011msec); 0 zone resets 00:37:40.880 slat (nsec): min=1611, max=16143k, avg=92000.12, stdev=650716.04 00:37:40.880 clat (usec): min=478, max=78541, avg=12873.86, stdev=10135.96 00:37:40.880 lat (usec): min=511, max=78545, avg=12965.86, stdev=10205.12 00:37:40.880 clat percentiles (usec): 00:37:40.880 | 1.00th=[ 2147], 5.00th=[ 4228], 10.00th=[ 6063], 20.00th=[ 7439], 00:37:40.880 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[11469], 00:37:40.880 | 70.00th=[13566], 80.00th=[16712], 90.00th=[20579], 95.00th=[26084], 00:37:40.880 | 99.00th=[67634], 99.50th=[72877], 99.90th=[78119], 99.95th=[78119], 00:37:40.880 | 99.99th=[78119] 00:37:40.880 bw ( KiB/s): min=20280, max=21936, per=22.08%, avg=21108.00, stdev=1170.97, samples=2 00:37:40.880 iops : min= 5070, max= 5484, avg=5277.00, stdev=292.74, samples=2 00:37:40.880 lat (usec) : 500=0.01%, 750=0.03% 00:37:40.880 lat (msec) : 2=0.42%, 4=1.88%, 10=50.40%, 20=39.48%, 50=6.65% 00:37:40.880 lat (msec) : 100=1.13% 00:37:40.880 cpu : usr=3.86%, sys=4.85%, ctx=410, majf=0, minf=1 00:37:40.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:37:40.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:40.880 issued rwts: total=5120,5405,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:40.880 job3: (groupid=0, jobs=1): err= 0: pid=1297982: Tue Nov 19 11:05:19 2024 00:37:40.880 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:37:40.880 slat (nsec): min=991, max=50992k, avg=117811.88, stdev=1198372.75 00:37:40.880 clat (usec): min=1992, max=79974, avg=14726.08, stdev=12406.57 00:37:40.880 lat (usec): min=1998, max=79980, avg=14843.89, stdev=12507.54 00:37:40.880 clat percentiles (usec): 00:37:40.880 | 1.00th=[ 2802], 5.00th=[ 5342], 10.00th=[ 6456], 20.00th=[ 8356], 00:37:40.880 | 30.00th=[ 8848], 40.00th=[10028], 50.00th=[10945], 60.00th=[13435], 00:37:40.880 | 70.00th=[15270], 80.00th=[16909], 
90.00th=[22414], 95.00th=[38536], 00:37:40.880 | 99.00th=[74974], 99.50th=[79168], 99.90th=[80217], 99.95th=[80217], 00:37:40.880 | 99.99th=[80217] 00:37:40.880 write: IOPS=4449, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1004msec); 0 zone resets 00:37:40.880 slat (nsec): min=1598, max=20188k, avg=98561.38, stdev=777741.98 00:37:40.880 clat (usec): min=746, max=71251, avg=15016.21, stdev=11834.45 00:37:40.880 lat (usec): min=761, max=71274, avg=15114.78, stdev=11913.36 00:37:40.880 clat percentiles (usec): 00:37:40.880 | 1.00th=[ 1434], 5.00th=[ 4686], 10.00th=[ 5800], 20.00th=[ 7504], 00:37:40.880 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[13042], 00:37:40.880 | 70.00th=[16319], 80.00th=[21103], 90.00th=[28443], 95.00th=[40633], 00:37:40.880 | 99.00th=[60031], 99.50th=[61080], 99.90th=[69731], 99.95th=[69731], 00:37:40.880 | 99.99th=[70779] 00:37:40.880 bw ( KiB/s): min=14232, max=20480, per=18.16%, avg=17356.00, stdev=4418.00, samples=2 00:37:40.880 iops : min= 3558, max= 5120, avg=4339.00, stdev=1104.50, samples=2 00:37:40.880 lat (usec) : 750=0.01%, 1000=0.06% 00:37:40.880 lat (msec) : 2=1.16%, 4=2.44%, 10=39.96%, 20=39.72%, 50=12.93% 00:37:40.880 lat (msec) : 100=3.73% 00:37:40.880 cpu : usr=3.39%, sys=4.19%, ctx=354, majf=0, minf=2 00:37:40.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:37:40.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:40.880 issued rwts: total=4096,4467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:40.880 00:37:40.880 Run status group 0 (all jobs): 00:37:40.880 READ: bw=88.9MiB/s (93.3MB/s), 15.9MiB/s-27.7MiB/s (16.7MB/s-29.0MB/s), io=90.0MiB (94.4MB), run=1002-1012msec 00:37:40.880 WRITE: bw=93.3MiB/s (97.9MB/s), 17.4MiB/s-28.2MiB/s (18.2MB/s-29.6MB/s), io=94.5MiB (99.0MB), run=1002-1012msec 00:37:40.880 00:37:40.880 Disk stats (read/write): 00:37:40.880 nvme0n1: ios=6194/6247, merge=0/0, ticks=43629/38782, in_queue=82411, util=91.48% 00:37:40.880 nvme0n2: ios=5031/5120, merge=0/0, ticks=38801/37193, in_queue=75994, util=88.28% 00:37:40.880 nvme0n3: ios=4608/4679, merge=0/0, ticks=51187/48771, in_queue=99958, util=88.29% 00:37:40.880 nvme0n4: ios=3845/4096, merge=0/0, ticks=41939/48463, in_queue=90402, util=89.42% 00:37:40.880 11:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:37:40.880 [global] 00:37:40.880 thread=1 00:37:40.880 invalidate=1 00:37:40.880 rw=randwrite 00:37:40.880 time_based=1 00:37:40.880 runtime=1 00:37:40.880 ioengine=libaio 00:37:40.880 direct=1 00:37:40.880 bs=4096 00:37:40.880 iodepth=128 00:37:40.880 norandommap=0 00:37:40.880 numjobs=1 00:37:40.880 00:37:40.880 verify_dump=1 00:37:40.880 verify_backlog=512 00:37:40.880 verify_state_save=0 00:37:40.880 do_verify=1 00:37:40.880 verify=crc32c-intel 00:37:40.880 [job0] 00:37:40.880 filename=/dev/nvme0n1 00:37:40.880 [job1] 00:37:40.880 filename=/dev/nvme0n2 00:37:40.880 [job2] 00:37:40.880 filename=/dev/nvme0n3 00:37:40.880 [job3] 00:37:40.880 filename=/dev/nvme0n4 00:37:40.880 Could not set queue depth (nvme0n1) 00:37:40.880 Could not set queue depth (nvme0n2) 00:37:40.880 Could not set queue depth (nvme0n3) 00:37:40.880 Could not set queue depth (nvme0n4) 00:37:41.141 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:41.141 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:41.141 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:41.141 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:41.141 fio-3.35 00:37:41.141 Starting 4 threads 00:37:42.529 00:37:42.529 job0: (groupid=0, jobs=1): err= 0: pid=1298407: Tue Nov 19 11:05:21 2024 00:37:42.529 read: IOPS=7104, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1009msec) 00:37:42.529 slat (nsec): min=878, max=16050k, avg=71060.15, stdev=478599.81 00:37:42.529 clat (usec): min=1611, max=32548, avg=9204.82, stdev=2613.29 00:37:42.529 lat (usec): min=1622, max=32559, avg=9275.88, stdev=2643.77 00:37:42.529 clat percentiles (usec): 00:37:42.529 | 1.00th=[ 5145], 5.00th=[ 7111], 10.00th=[ 7570], 20.00th=[ 8029], 00:37:42.529 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:37:42.529 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[11731], 95.00th=[16450], 00:37:42.529 | 99.00th=[18744], 99.50th=[19530], 99.90th=[21365], 99.95th=[22676], 00:37:42.529 | 99.99th=[32637] 00:37:42.529 write: IOPS=7276, BW=28.4MiB/s (29.8MB/s)(28.7MiB/1009msec); 0 zone resets 00:37:42.529 slat (nsec): min=1474, max=7505.2k, avg=63067.23, stdev=306160.76 00:37:42.529 clat (usec): min=1156, max=31891, avg=8439.85, stdev=2938.47 00:37:42.529 lat (usec): min=1165, max=31898, avg=8502.92, stdev=2944.60 00:37:42.529 clat percentiles (usec): 00:37:42.529 | 1.00th=[ 2343], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7111], 00:37:42.529 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8291], 00:37:42.529 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9896], 95.00th=[12387], 00:37:42.529 | 99.00th=[21890], 99.50th=[26608], 99.90th=[30802], 99.95th=[31327], 00:37:42.529 | 99.99th=[31851] 00:37:42.529 bw ( KiB/s): min=24944, max=32768, per=28.94%, avg=28856.00, stdev=5532.40, samples=2 00:37:42.529 iops : min= 6236, max= 8192, avg=7214.00, stdev=1383.10, samples=2 00:37:42.529 lat (msec) : 2=0.58%, 4=0.71%, 10=88.15%, 20=9.82%, 50=0.74% 00:37:42.529 cpu : usr=3.37%, sys=4.66%, ctx=1014, majf=0, minf=2 00:37:42.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:37:42.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:42.529 issued rwts: total=7168,7342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:42.529 job1: (groupid=0, jobs=1): err= 0: pid=1298420: Tue Nov 19 11:05:21 2024 00:37:42.529 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:37:42.529 slat (nsec): min=1289, max=14631k, avg=81287.67, stdev=702204.46 00:37:42.529 clat (usec): min=1807, max=33275, avg=11134.58, stdev=5003.02 00:37:42.529 lat (usec): min=1817, max=33283, avg=11215.87, stdev=5054.63 00:37:42.529 clat percentiles (usec): 00:37:42.529 | 1.00th=[ 4015], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7504], 00:37:42.529 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 9110], 60.00th=[10945], 00:37:42.529 | 70.00th=[12518], 80.00th=[15401], 90.00th=[18744], 95.00th=[21890], 00:37:42.529 | 99.00th=[25822], 99.50th=[26346], 99.90th=[33162], 99.95th=[33162], 00:37:42.529 | 99.99th=[33162] 00:37:42.529 write: IOPS=6054, 
BW=23.7MiB/s (24.8MB/s)(23.7MiB/1003msec); 0 zone resets 00:37:42.529 slat (nsec): min=1631, max=13701k, avg=77212.00, stdev=632974.04 00:37:42.529 clat (usec): min=1223, max=41638, avg=10639.24, stdev=6802.88 00:37:42.529 lat (usec): min=1233, max=41648, avg=10716.46, stdev=6848.71 00:37:42.529 clat percentiles (usec): 00:37:42.529 | 1.00th=[ 3851], 5.00th=[ 4948], 10.00th=[ 5669], 20.00th=[ 5997], 00:37:42.529 | 30.00th=[ 6521], 40.00th=[ 7177], 50.00th=[ 7963], 60.00th=[10159], 00:37:42.529 | 70.00th=[11731], 80.00th=[13698], 90.00th=[18482], 95.00th=[23462], 00:37:42.529 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:37:42.529 | 99.99th=[41681] 00:37:42.529 bw ( KiB/s): min=22984, max=24576, per=23.85%, avg=23780.00, stdev=1125.71, samples=2 00:37:42.529 iops : min= 5746, max= 6144, avg=5945.00, stdev=281.43, samples=2 00:37:42.529 lat (msec) : 2=0.28%, 4=0.79%, 10=55.22%, 20=35.74%, 50=7.96% 00:37:42.529 cpu : usr=4.79%, sys=6.89%, ctx=287, majf=0, minf=1 00:37:42.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:42.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:42.529 issued rwts: total=5632,6073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:42.529 job2: (groupid=0, jobs=1): err= 0: pid=1298437: Tue Nov 19 11:05:21 2024 00:37:42.529 read: IOPS=7464, BW=29.2MiB/s (30.6MB/s)(29.3MiB/1004msec) 00:37:42.529 slat (nsec): min=943, max=14362k, avg=68842.56, stdev=566007.21 00:37:42.529 clat (usec): min=2924, max=31033, avg=9040.82, stdev=2708.35 00:37:42.529 lat (usec): min=3182, max=31061, avg=9109.66, stdev=2750.82 00:37:42.529 clat percentiles (usec): 00:37:42.529 | 1.00th=[ 4424], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 7046], 00:37:42.529 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8848], 00:37:42.529 | 70.00th=[ 9896], 80.00th=[11338], 90.00th=[12649], 95.00th=[13435], 00:37:42.529 | 99.00th=[20055], 99.50th=[20055], 99.90th=[21103], 99.95th=[21103], 00:37:42.529 | 99.99th=[31065] 00:37:42.529 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:37:42.529 slat (nsec): min=1587, max=12696k, avg=57250.34, stdev=466245.20 00:37:42.529 clat (usec): min=1426, max=20246, avg=7764.57, stdev=2318.04 00:37:42.529 lat (usec): min=1435, max=20255, avg=7821.82, stdev=2331.87 00:37:42.529 clat percentiles (usec): 00:37:42.529 | 1.00th=[ 3163], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 6063], 00:37:42.529 | 30.00th=[ 6652], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 7898], 00:37:42.529 | 70.00th=[ 8225], 80.00th=[ 9241], 90.00th=[10814], 95.00th=[12125], 00:37:42.529 | 99.00th=[13960], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:37:42.529 | 99.99th=[20317] 00:37:42.529 bw ( KiB/s): min=28688, max=32752, per=30.81%, avg=30720.00, stdev=2873.68, samples=2 00:37:42.529 iops : min= 7172, max= 8188, avg=7680.00, stdev=718.42, samples=2 00:37:42.529 lat (msec) : 2=0.10%, 4=1.04%, 10=78.10%, 20=20.34%, 50=0.42% 00:37:42.529 cpu : usr=4.99%, sys=8.37%, ctx=364, majf=0, minf=1 00:37:42.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:37:42.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:42.529 issued rwts: total=7494,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:37:42.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:42.529 job3: (groupid=0, jobs=1): err= 0: pid=1298444: Tue Nov 19 11:05:21 2024 00:37:42.529 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:37:42.529 slat (nsec): min=1002, max=16995k, avg=104172.01, stdev=854686.79 00:37:42.529 clat (usec): min=3603, max=30679, avg=13913.74, stdev=4512.72 00:37:42.529 lat (usec): min=3610, max=31932, avg=14017.91, stdev=4576.40 00:37:42.529 clat percentiles (usec): 00:37:42.529 | 1.00th=[ 6587], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9372], 00:37:42.529 | 30.00th=[11338], 40.00th=[12649], 50.00th=[13304], 60.00th=[14484], 00:37:42.529 | 70.00th=[16057], 80.00th=[17433], 90.00th=[19530], 95.00th=[21365], 00:37:42.529 | 99.00th=[27657], 99.50th=[27657], 99.90th=[29230], 99.95th=[29492], 00:37:42.529 | 99.99th=[30802] 00:37:42.529 write: IOPS=4017, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1009msec); 0 zone resets 00:37:42.529 slat (nsec): min=1547, max=12582k, avg=138064.23, stdev=840498.20 00:37:42.529 clat (usec): min=587, max=79315, avg=19292.63, stdev=18348.24 00:37:42.529 lat (usec): min=621, max=79324, avg=19430.70, stdev=18478.83 00:37:42.529 clat percentiles (usec): 00:37:42.529 | 1.00th=[ 2024], 5.00th=[ 4015], 10.00th=[ 5735], 20.00th=[ 8094], 00:37:42.529 | 30.00th=[ 9765], 40.00th=[11469], 50.00th=[12256], 60.00th=[13698], 00:37:42.529 | 70.00th=[15401], 80.00th=[22938], 90.00th=[51643], 95.00th=[64226], 00:37:42.529 | 99.00th=[73925], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:37:42.529 | 99.99th=[79168] 00:37:42.529 bw ( KiB/s): min=14832, max=16576, per=15.75%, avg=15704.00, stdev=1233.19, samples=2 00:37:42.529 iops : min= 3708, max= 4144, avg=3926.00, stdev=308.30, samples=2 00:37:42.529 lat (usec) : 750=0.05%, 1000=0.04% 00:37:42.529 lat (msec) : 2=0.33%, 4=2.46%, 10=25.75%, 20=56.35%, 50=9.03% 00:37:42.529 lat (msec) : 100=5.98% 00:37:42.529 cpu : usr=2.88%, sys=4.46%, ctx=286, majf=0, minf=1 00:37:42.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:42.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:42.529 issued rwts: total=3584,4054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:42.529 00:37:42.529 Run status group 0 (all jobs): 00:37:42.529 READ: bw=92.4MiB/s (96.9MB/s), 13.9MiB/s-29.2MiB/s (14.5MB/s-30.6MB/s), io=93.3MiB (97.8MB), run=1003-1009msec 00:37:42.529 WRITE: bw=97.4MiB/s (102MB/s), 15.7MiB/s-29.9MiB/s (16.5MB/s-31.3MB/s), io=98.2MiB (103MB), run=1003-1009msec 00:37:42.529 00:37:42.529 Disk stats (read/write): 00:37:42.529 nvme0n1: ios=6320/6656, merge=0/0, ticks=13209/13124, in_queue=26333, util=95.49% 00:37:42.529 nvme0n2: ios=4133/4602, merge=0/0, ticks=47616/53751, in_queue=101367, util=87.72% 00:37:42.529 nvme0n3: ios=6392/6656, merge=0/0, ticks=52105/47367, in_queue=99472, util=96.08% 00:37:42.529 nvme0n4: ios=3222/3584, merge=0/0, ticks=43692/57881, in_queue=101573, util=89.47% 00:37:42.529 11:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:37:42.529 11:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1298609 00:37:42.529 11:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 
10 00:37:42.530 11:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:37:42.530 [global] 00:37:42.530 thread=1 00:37:42.530 invalidate=1 00:37:42.530 rw=read 00:37:42.530 time_based=1 00:37:42.530 runtime=10 00:37:42.530 ioengine=libaio 00:37:42.530 direct=1 00:37:42.530 bs=4096 00:37:42.530 iodepth=1 00:37:42.530 norandommap=1 00:37:42.530 numjobs=1 00:37:42.530 00:37:42.530 [job0] 00:37:42.530 filename=/dev/nvme0n1 00:37:42.530 [job1] 00:37:42.530 filename=/dev/nvme0n2 00:37:42.530 [job2] 00:37:42.530 filename=/dev/nvme0n3 00:37:42.530 [job3] 00:37:42.530 filename=/dev/nvme0n4 00:37:42.530 Could not set queue depth (nvme0n1) 00:37:42.530 Could not set queue depth (nvme0n2) 00:37:42.530 Could not set queue depth (nvme0n3) 00:37:42.530 Could not set queue depth (nvme0n4) 00:37:42.791 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:42.791 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:42.791 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:42.791 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:42.791 fio-3.35 00:37:42.791 Starting 4 threads 00:37:45.337 11:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:37:45.599 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9973760, buflen=4096 00:37:45.599 fio: pid=1298902, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:45.599 11:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:37:45.860 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=16908288, buflen=4096 00:37:45.860 fio: pid=1298896, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:45.860 11:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:45.860 11:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:37:45.860 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=294912, buflen=4096 00:37:45.860 fio: pid=1298870, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:37:45.860 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:45.860 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:37:46.122 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:46.122 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=311296, buflen=4096 00:37:46.122 fio: pid=1298880, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:46.122 11:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:37:46.122 00:37:46.122 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1298870: Tue Nov 19 11:05:25 2024 00:37:46.122 read: IOPS=24, BW=97.6KiB/s (99.9kB/s)(288KiB/2951msec) 00:37:46.122 slat (usec): min=26, max=7249, avg=127.98, stdev=845.12 00:37:46.122 clat (usec): min=936, max=43956, avg=40848.98, stdev=4802.93 00:37:46.122 lat (usec): min=1009, max=43988, avg=40878.04, stdev=4797.70 00:37:46.122 clat percentiles (usec): 00:37:46.122 | 1.00th=[ 938], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:37:46.122 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:37:46.122 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:37:46.122 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:37:46.122 | 99.99th=[43779] 00:37:46.122 bw ( KiB/s): min= 88, max= 104, per=1.13%, avg=97.60, stdev= 6.69, samples=5 00:37:46.122 iops : min= 22, max= 26, avg=24.40, stdev= 1.67, samples=5 00:37:46.122 lat (usec) : 1000=1.37% 00:37:46.122 lat (msec) : 50=97.26% 00:37:46.122 cpu : usr=0.00%, sys=0.37%, ctx=74, majf=0, minf=1 00:37:46.122 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.122 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.122 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.122 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:46.122 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1298880: Tue Nov 19 11:05:25 2024 00:37:46.122 read: IOPS=24, BW=97.0KiB/s (99.4kB/s)(304KiB/3133msec) 00:37:46.122 slat (usec): min=25, max=18802, avg=432.44, stdev=2456.55 00:37:46.122 clat (usec): min=708, max=42243, avg=40486.12, stdev=6560.35 00:37:46.122 lat (usec): min=734, max=60036, avg=40923.89, stdev=7056.85 00:37:46.122 clat percentiles (usec): 00:37:46.122 | 1.00th=[ 709], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:46.122 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:37:46.122 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:46.122 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:46.122 | 99.99th=[42206] 00:37:46.122 bw ( KiB/s): min= 89, max= 104, per=1.13%, avg=97.50, stdev= 5.72, samples=6 00:37:46.122 iops : min= 22, max= 26, avg=24.33, stdev= 1.51, samples=6 00:37:46.122 lat (usec) : 750=1.30% 00:37:46.122 lat (msec) : 2=1.30%, 50=96.10% 00:37:46.122 cpu : usr=0.16%, sys=0.00%, ctx=80, majf=0, minf=2 00:37:46.122 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.122 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.122 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.122 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:46.122 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1298896: Tue Nov 19 11:05:25 2024 00:37:46.122 read: IOPS=1493, BW=5974KiB/s (6117kB/s)(16.1MiB/2764msec) 00:37:46.122 slat (usec): min=6, max=11143, avg=29.35, 
stdev=235.20 00:37:46.122 clat (usec): min=221, max=4082, avg=629.73, stdev=135.83 00:37:46.122 lat (usec): min=229, max=11891, avg=659.08, stdev=274.16 00:37:46.122 clat percentiles (usec): 00:37:46.122 | 1.00th=[ 416], 5.00th=[ 465], 10.00th=[ 494], 20.00th=[ 537], 00:37:46.122 | 30.00th=[ 562], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 652], 00:37:46.122 | 70.00th=[ 693], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 799], 00:37:46.122 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 947], 99.95th=[ 3294], 00:37:46.122 | 99.99th=[ 4080] 00:37:46.122 bw ( KiB/s): min= 5592, max= 7016, per=71.31%, avg=6110.40, stdev=655.59, samples=5 00:37:46.122 iops : min= 1398, max= 1754, avg=1527.60, stdev=163.90, samples=5 00:37:46.122 lat (usec) : 250=0.07%, 500=10.61%, 750=72.63%, 1000=16.59% 00:37:46.122 lat (msec) : 4=0.05%, 10=0.02% 00:37:46.122 cpu : usr=1.88%, sys=5.68%, ctx=4132, majf=0, minf=2 00:37:46.122 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.122 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.122 issued rwts: total=4129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.122 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:46.122 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1298902: Tue Nov 19 11:05:25 2024 00:37:46.122 read: IOPS=947, BW=3787KiB/s (3878kB/s)(9740KiB/2572msec) 00:37:46.122 slat (nsec): min=6908, max=62104, avg=24642.17, stdev=5863.34 00:37:46.122 clat (usec): min=269, max=42122, avg=1021.60, stdev=2987.56 00:37:46.122 lat (usec): min=277, max=42147, avg=1046.25, stdev=2987.68 00:37:46.122 clat percentiles (usec): 00:37:46.122 | 1.00th=[ 486], 5.00th=[ 562], 10.00th=[ 619], 20.00th=[ 676], 00:37:46.122 | 30.00th=[ 725], 40.00th=[ 775], 50.00th=[ 816], 60.00th=[ 848], 00:37:46.122 | 70.00th=[ 881], 80.00th=[ 914], 90.00th=[ 963], 95.00th=[ 1037], 00:37:46.122 | 99.00th=[ 1237], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:37:46.122 | 99.99th=[42206] 00:37:46.122 bw ( KiB/s): min= 976, max= 5152, per=44.20%, avg=3787.20, stdev=1650.44, samples=5 00:37:46.122 iops : min= 244, max= 1288, avg=946.80, stdev=412.61, samples=5 00:37:46.122 lat (usec) : 500=1.52%, 750=33.17%, 1000=58.50% 00:37:46.122 lat (msec) : 2=6.24%, 50=0.53% 00:37:46.122 cpu : usr=0.82%, sys=2.88%, ctx=2436, majf=0, minf=2 00:37:46.122 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.122 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.122 issued rwts: total=2436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.122 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:46.122 00:37:46.122 Run status group 0 (all jobs): 00:37:46.122 READ: bw=8568KiB/s (8774kB/s), 97.0KiB/s-5974KiB/s (99.4kB/s-6117kB/s), io=26.2MiB (27.5MB), run=2572-3133msec 00:37:46.122 00:37:46.122 Disk stats (read/write): 00:37:46.122 nvme0n1: ios=69/0, merge=0/0, ticks=2821/0, in_queue=2821, util=94.76% 00:37:46.122 nvme0n2: ios=75/0, merge=0/0, ticks=3038/0, in_queue=3038, util=94.76% 00:37:46.122 nvme0n3: ios=3937/0, merge=0/0, ticks=2139/0, in_queue=2139, util=96.03% 00:37:46.122 nvme0n4: ios=2182/0, merge=0/0, ticks=2227/0, in_queue=2227, util=96.06% 00:37:46.383 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # 
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:46.383 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:37:46.664 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:46.664 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:37:46.664 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:46.664 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:37:46.925 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:46.925 11:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1298609 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:47.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:37:47.185 nvmf hotplug test: fio failed as expected 00:37:47.185 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:47.446 11:05:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:47.446 rmmod nvme_tcp 00:37:47.446 rmmod nvme_fabrics 00:37:47.446 rmmod nvme_keyring 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1295437 ']' 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1295437 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1295437 ']' 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1295437 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1295437 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1295437' 00:37:47.446 killing process with pid 1295437 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1295437 00:37:47.446 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1295437 00:37:47.707 11:05:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.707 11:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.622 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:49.622 00:37:49.622 real 0m28.234s 00:37:49.622 user 2m16.806s 00:37:49.622 sys 0m12.128s 00:37:49.622 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:49.622 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:49.622 ************************************ 00:37:49.622 END TEST nvmf_fio_target 00:37:49.622 ************************************ 00:37:49.884 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:49.884 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:49.884 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:49.884 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:49.884 ************************************ 00:37:49.884 START TEST nvmf_bdevio 00:37:49.884 ************************************ 00:37:49.884 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:49.884 * Looking for test storage... 
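The fio-target teardown traced above reduces to a short, repeatable sequence: free the backing malloc bdevs over RPC, disconnect the initiator, delete the subsystem, and stop the target. A minimal sketch of that sequence, assuming it runs from an SPDK checkout with rpc.py talking to the default /var/tmp/spdk.sock and with $nvmfpid holding the target's pid (both assumptions, not read from this log):

    # Sketch of the fio.sh teardown sequence traced above (assumptions noted).
    for bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$bdev"       # free the backing bdevs
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # drop the initiator session
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                  # stop nvmf_tgt and reap it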
00:37:49.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:49.884 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:49.884 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:37:49.884 11:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:49.884 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:49.884 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.884 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.884 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.884 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.884 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.884 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.884 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.884 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:49.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.885 --rc genhtml_branch_coverage=1 00:37:49.885 --rc genhtml_function_coverage=1 00:37:49.885 --rc genhtml_legend=1 00:37:49.885 --rc geninfo_all_blocks=1 00:37:49.885 --rc geninfo_unexecuted_blocks=1 00:37:49.885 00:37:49.885 ' 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:49.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.885 --rc genhtml_branch_coverage=1 00:37:49.885 --rc genhtml_function_coverage=1 00:37:49.885 --rc genhtml_legend=1 00:37:49.885 --rc geninfo_all_blocks=1 00:37:49.885 --rc geninfo_unexecuted_blocks=1 00:37:49.885 00:37:49.885 ' 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:49.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.885 --rc genhtml_branch_coverage=1 00:37:49.885 --rc genhtml_function_coverage=1 00:37:49.885 --rc genhtml_legend=1 00:37:49.885 --rc geninfo_all_blocks=1 00:37:49.885 --rc geninfo_unexecuted_blocks=1 00:37:49.885 00:37:49.885 ' 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:49.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.885 --rc genhtml_branch_coverage=1 00:37:49.885 --rc genhtml_function_coverage=1 00:37:49.885 --rc genhtml_legend=1 00:37:49.885 --rc geninfo_all_blocks=1 00:37:49.885 --rc geninfo_unexecuted_blocks=1 00:37:49.885 00:37:49.885 ' 00:37:49.885 11:05:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.885 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:50.145 11:05:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:37:50.145 11:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:58.289 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:58.290 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:58.290 11:05:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:58.290 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:58.290 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:58.290 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:58.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:58.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:37:58.290 00:37:58.290 --- 10.0.0.2 ping statistics --- 00:37:58.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.290 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:58.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:58.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:37:58.290 00:37:58.290 --- 10.0.0.1 ping statistics --- 00:37:58.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.290 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:58.290 11:05:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1303886 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1303886 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1303886 ']' 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.290 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.291 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.291 11:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:58.291 [2024-11-19 11:05:36.485034] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:58.291 [2024-11-19 11:05:36.486011] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:37:58.291 [2024-11-19 11:05:36.486048] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.291 [2024-11-19 11:05:36.579115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:58.291 [2024-11-19 11:05:36.615844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:58.291 [2024-11-19 11:05:36.615875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:58.291 [2024-11-19 11:05:36.615884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.291 [2024-11-19 11:05:36.615890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.291 [2024-11-19 11:05:36.615896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:58.291 [2024-11-19 11:05:36.617392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:58.291 [2024-11-19 11:05:36.617625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:58.291 [2024-11-19 11:05:36.617744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:58.291 [2024-11-19 11:05:36.617744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:58.291 [2024-11-19 11:05:36.674916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
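The nvmfappstart step above boils down to launching nvmf_tgt inside the test namespace with interrupt mode on, then waiting for its RPC socket; the reactor and spdk_thread notices confirm each poll group came up in intr mode. A minimal sketch of that startup, assuming the cvl_0_0_ns_spdk namespace from the trace already exists and SPDK_DIR points at a built tree (hypothetical variable):

    # Sketch: start nvmf_tgt in interrupt mode inside the test netns,
    # then wait for the RPC socket before issuing any rpc.py calls.
    SPDK_DIR=/path/to/spdk                              # assumption: local build
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude waitforlisten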
00:37:58.291 [2024-11-19 11:05:36.676127] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:58.291 [2024-11-19 11:05:36.676670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:58.291 [2024-11-19 11:05:36.677221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:58.291 [2024-11-19 11:05:36.677273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:58.291 [2024-11-19 11:05:37.326525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:58.291 Malloc0 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.291 11:05:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:58.291 [2024-11-19 11:05:37.410736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:58.291 { 00:37:58.291 "params": { 00:37:58.291 "name": "Nvme$subsystem", 00:37:58.291 "trtype": "$TEST_TRANSPORT", 00:37:58.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:58.291 "adrfam": "ipv4", 00:37:58.291 "trsvcid": "$NVMF_PORT", 00:37:58.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:58.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:58.291 "hdgst": ${hdgst:-false}, 00:37:58.291 "ddgst": ${ddgst:-false} 00:37:58.291 }, 00:37:58.291 "method": "bdev_nvme_attach_controller" 00:37:58.291 } 00:37:58.291 EOF 00:37:58.291 )") 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:37:58.291 11:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:58.291 "params": { 00:37:58.291 "name": "Nvme1", 00:37:58.291 "trtype": "tcp", 00:37:58.291 "traddr": "10.0.0.2", 00:37:58.291 "adrfam": "ipv4", 00:37:58.291 "trsvcid": "4420", 00:37:58.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:58.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:58.291 "hdgst": false, 00:37:58.291 "ddgst": false 00:37:58.291 }, 00:37:58.291 "method": "bdev_nvme_attach_controller" 00:37:58.291 }' 00:37:58.291 [2024-11-19 11:05:37.468235] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
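The heredoc above prints only the per-controller fragment; gen_nvmf_target_json wraps it into a full SPDK JSON config and feeds it to bdevio as /dev/fd/62. Written out as a standalone file under the standard subsystems/config layout, the equivalent input plausibly looks like this (wrapper structure reconstructed, not copied from the log; params taken verbatim from the printf above):

    cat > /tmp/bdevio.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    test/bdev/bdevio/bdevio --json /tmp/bdevio.json     # equivalent to --json /dev/fd/62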
00:37:58.291 [2024-11-19 11:05:37.468307] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304170 ] 00:37:58.553 [2024-11-19 11:05:37.560505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:58.553 [2024-11-19 11:05:37.637222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.553 [2024-11-19 11:05:37.637290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.553 [2024-11-19 11:05:37.637290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:58.815 I/O targets: 00:37:58.815 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:58.815 00:37:58.815 00:37:58.815 CUnit - A unit testing framework for C - Version 2.1-3 00:37:58.815 http://cunit.sourceforge.net/ 00:37:58.815 00:37:58.815 00:37:58.815 Suite: bdevio tests on: Nvme1n1 00:37:59.076 Test: blockdev write read block ...passed 00:37:59.076 Test: blockdev write zeroes read block ...passed 00:37:59.076 Test: blockdev write zeroes read no split ...passed 00:37:59.076 Test: blockdev write zeroes read split ...passed 00:37:59.076 Test: blockdev write zeroes read split partial ...passed 00:37:59.076 Test: blockdev reset ...[2024-11-19 11:05:38.089318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:37:59.076 [2024-11-19 11:05:38.089382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x943970 (9): Bad file descriptor 00:37:59.076 [2024-11-19 11:05:38.135903] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:37:59.076 passed 00:37:59.076 Test: blockdev write read 8 blocks ...passed 00:37:59.077 Test: blockdev write read size > 128k ...passed 00:37:59.077 Test: blockdev write read invalid size ...passed 00:37:59.077 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:59.077 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:59.077 Test: blockdev write read max offset ...passed 00:37:59.337 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:59.337 Test: blockdev writev readv 8 blocks ...passed 00:37:59.337 Test: blockdev writev readv 30 x 1block ...passed 00:37:59.337 Test: blockdev writev readv block ...passed 00:37:59.337 Test: blockdev writev readv size > 128k ...passed 00:37:59.337 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:59.337 Test: blockdev comparev and writev ...[2024-11-19 11:05:38.355654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:59.337 [2024-11-19 11:05:38.355686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:59.337 [2024-11-19 11:05:38.355702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:59.337 [2024-11-19 11:05:38.355711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:59.337 [2024-11-19 11:05:38.356131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:59.337 [2024-11-19 11:05:38.356143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:59.337 [2024-11-19 11:05:38.356162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:59.337 [2024-11-19 11:05:38.356170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:59.337 [2024-11-19 11:05:38.356562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:59.337 [2024-11-19 11:05:38.356573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:59.337 [2024-11-19 11:05:38.356586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:59.337 [2024-11-19 11:05:38.356597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:59.337 [2024-11-19 11:05:38.356984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:59.337 [2024-11-19 11:05:38.356995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:59.337 [2024-11-19 11:05:38.357009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:59.337 [2024-11-19 11:05:38.357017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:59.337 passed 00:37:59.337 Test: blockdev nvme passthru rw ...passed 00:37:59.337 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:05:38.440612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:59.338 [2024-11-19 11:05:38.440625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:59.338 [2024-11-19 11:05:38.440845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:59.338 [2024-11-19 11:05:38.440856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:59.338 [2024-11-19 11:05:38.441066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:59.338 [2024-11-19 11:05:38.441077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:59.338 [2024-11-19 11:05:38.441282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:59.338 [2024-11-19 11:05:38.441292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:59.338 passed 00:37:59.338 Test: blockdev nvme admin passthru ...passed 00:37:59.338 Test: blockdev copy ...passed 00:37:59.338 00:37:59.338 Run Summary: Type Total Ran Passed Failed Inactive 00:37:59.338 suites 1 1 n/a 0 0 00:37:59.338 tests 23 23 23 0 0 00:37:59.338 asserts 152 152 152 0 n/a 00:37:59.338 00:37:59.338 Elapsed time = 1.081 seconds 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:59.599 rmmod nvme_tcp 00:37:59.599 rmmod nvme_fabrics 00:37:59.599 rmmod nvme_keyring 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
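The killprocess helper, traced in full earlier for pid 1295437 (the '[' -z ']' guard, kill -0, the ps comm= sudo check, then kill and wait), reduces to the pattern sketched below; this is reconstructed from the xtrace, not from common/autotest_common.sh itself:

    # Reconstructed killprocess pattern (sketch, based on the xtrace above).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
        [ "$(uname)" = Linux ] &&
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] &&
            return 1                                # sudo wrappers handled elsewhere (not shown)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap our own child
    }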
00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1303886 ']' 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1303886 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1303886 ']' 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1303886 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1303886 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1303886' 00:37:59.599 killing process with pid 1303886 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1303886 00:37:59.599 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1303886 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:59.859 11:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.407 11:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:02.407 00:38:02.407 real 0m12.109s 00:38:02.407 user 
0m10.060s 00:38:02.407 sys 0m6.336s 00:38:02.407 11:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:02.407 11:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:02.407 ************************************ 00:38:02.407 END TEST nvmf_bdevio 00:38:02.407 ************************************ 00:38:02.407 11:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:38:02.407 00:38:02.407 real 4m59.875s 00:38:02.407 user 10m13.418s 00:38:02.407 sys 2m2.912s 00:38:02.407 11:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:02.407 11:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:02.407 ************************************ 00:38:02.407 END TEST nvmf_target_core_interrupt_mode 00:38:02.407 ************************************ 00:38:02.407 11:05:41 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:02.407 11:05:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:02.407 11:05:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:02.407 11:05:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:02.407 ************************************ 00:38:02.407 START TEST nvmf_interrupt 00:38:02.407 ************************************ 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:02.407 * Looking for test storage... 
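Every suite in this log runs through run_test, which prints the START TEST / END TEST banners and the real/user/sys timing visible just above for nvmf_bdevio. A sketch of that wrapper pattern, inferred from the banners and timing rather than copied from autotest_common.sh:

    # Inferred run_test pattern (sketch): banner, time the suite, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                   # emits real/user/sys as seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test nvmf_interrupt ./test/nvmf/target/interrupt.sh \
        --transport=tcp --interrupt-mode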
00:38:02.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:02.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.407 --rc genhtml_branch_coverage=1 00:38:02.407 --rc genhtml_function_coverage=1 00:38:02.407 --rc genhtml_legend=1 00:38:02.407 --rc geninfo_all_blocks=1 00:38:02.407 --rc geninfo_unexecuted_blocks=1 00:38:02.407 00:38:02.407 ' 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:02.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.407 --rc genhtml_branch_coverage=1 00:38:02.407 --rc genhtml_function_coverage=1 00:38:02.407 --rc genhtml_legend=1 00:38:02.407 --rc geninfo_all_blocks=1 00:38:02.407 --rc geninfo_unexecuted_blocks=1 00:38:02.407 00:38:02.407 ' 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:02.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.407 --rc genhtml_branch_coverage=1 00:38:02.407 --rc genhtml_function_coverage=1 00:38:02.407 --rc genhtml_legend=1 00:38:02.407 --rc geninfo_all_blocks=1 00:38:02.407 --rc geninfo_unexecuted_blocks=1 00:38:02.407 00:38:02.407 ' 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:02.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.407 --rc genhtml_branch_coverage=1 00:38:02.407 --rc genhtml_function_coverage=1 00:38:02.407 --rc genhtml_legend=1 00:38:02.407 --rc geninfo_all_blocks=1 00:38:02.407 --rc geninfo_unexecuted_blocks=1 00:38:02.407 00:38:02.407 ' 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:02.407 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:38:02.408 11:05:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:38:10.552 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:10.553 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:10.553 11:05:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:10.553 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:10.553 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:10.553 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:10.553 11:05:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:10.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:10.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:38:10.553 00:38:10.553 --- 10.0.0.2 ping statistics --- 00:38:10.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:10.553 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:10.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
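The nvmf_tcp_init sequence traced here builds a physical loopback: port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), while its peer port cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); the two pings then confirm reachability in each direction. Condensed from the commands above, with interface and namespace names exactly as logged (the iptables rule is simplified; the real one also tags the rule with an SPDK_NVMF comment so it can be removed at teardown):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator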
00:38:10.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:38:10.553 00:38:10.553 --- 10.0.0.1 ping statistics --- 00:38:10.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:10.553 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1308517 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1308517 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1308517 ']' 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:10.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:10.553 11:05:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:10.553 [2024-11-19 11:05:48.802490] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:10.554 [2024-11-19 11:05:48.803628] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:38:10.554 [2024-11-19 11:05:48.803682] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:10.554 [2024-11-19 11:05:48.905321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:10.554 [2024-11-19 11:05:48.956328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:38:10.554 [2024-11-19 11:05:48.956376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:10.554 [2024-11-19 11:05:48.956384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:10.554 [2024-11-19 11:05:48.956391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:10.554 [2024-11-19 11:05:48.956398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:10.554 [2024-11-19 11:05:48.958196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.554 [2024-11-19 11:05:48.958268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.554 [2024-11-19 11:05:49.034956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:10.554 [2024-11-19 11:05:49.035475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:10.554 [2024-11-19 11:05:49.035849] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:38:10.554 5000+0 records in 00:38:10.554 5000+0 records out 00:38:10.554 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0197073 s, 520 MB/s 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:10.554 AIO0 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:10.554 [2024-11-19 11:05:49.731259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.554 11:05:49 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.554 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:10.814 [2024-11-19 11:05:49.775718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1308517 0 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1308517 0 idle 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1308517 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:10.814 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1308517 -w 256 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1308517 root 20 0 128.2g 42624 32256 S 6.2 0.0 0:00.33 reactor_0' 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1308517 root 20 0 128.2g 42624 32256 S 6.2 0.0 0:00.33 reactor_0 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1308517 1 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1308517 1 idle 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1308517 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1308517 -w 256 00:38:10.815 11:05:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1308521 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.00 reactor_1' 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1308521 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.00 reactor_1 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1308889 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
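The load generator launched just above is spdk_nvme_perf, pointed at the listener created earlier. The same invocation with the flags spelled out (flag meanings per the spdk_nvme_perf usage text; binary path shortened here):

    # -q 256     queue depth (note the later "Controller IO queue size 256" warning)
    # -o 4096    4 KiB I/O size
    # -w randrw  random mixed workload
    # -M 30      rwmixread: 30% reads / 70% writes
    # -t 10      run for 10 seconds
    # -c 0xC     initiator on cores 2-3, disjoint from the target's -m 0x3 (cores 0-1)
    spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'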
00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:38:11.076 11:05:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1308517 0 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1308517 0 busy 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1308517 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1308517 -w 256 00:38:11.077 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:11.338 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1308517 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.33 reactor_0' 00:38:11.338 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1308517 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.33 reactor_0 00:38:11.338 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:11.338 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:11.338 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:11.338 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:11.338 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:11.338 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:38:11.338 11:05:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1308517 -w 256 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1308517 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:02.53 reactor_0' 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1308517 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:02.53 reactor_0 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1308517 1 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1308517 1 busy 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1308517 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1308517 -w 256 00:38:12.392 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1308521 root 20 0 128.2g 43776 32256 R 93.8 0.0 0:01.29 reactor_1' 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1308521 root 20 0 128.2g 43776 32256 R 93.8 0.0 0:01.29 reactor_1 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:12.660 11:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1308889 00:38:22.662 Initializing NVMe Controllers 00:38:22.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:22.662 Controller IO queue size 256, less than required. 00:38:22.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:22.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:22.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:22.662 Initialization complete. Launching workers. 
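While perf runs, the harness asserts that each interrupt-mode reactor actually wakes up and burns CPU (the busy checks traced above, with BUSY_THRESHOLD=30), and after the run each reactor must fall back below the idle threshold (the checks following the results below). The measurement is plain top-output parsing; reduced to its essence from the trace, with the pid from this run hard-coded for illustration:

    pid=1308517                    # nvmf_tgt pid from this run
    cpu=$(top -bHn 1 -p "$pid" -w 256 | grep reactor_0 | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu=${cpu%.*}                  # truncate: 93.8 -> 93
    (( cpu > 30 )) && echo busy || echo idle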
00:38:22.662 ========================================================
00:38:22.662 Latency(us)
00:38:22.662 Device Information : IOPS MiB/s Average min max
00:38:22.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18859.97 73.67 13578.38 5244.95 32899.04
00:38:22.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19763.47 77.20 12954.41 8213.18 30107.12
00:38:22.662 ========================================================
00:38:22.662 Total : 38623.44 150.87 13259.09 5244.95 32899.04
00:38:22.662
00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1308517 0 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1308517 0 idle 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1308517 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1308517 -w 256 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1308517 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.31 reactor_0' 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1308517 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.31 reactor_0 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1308517 1 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1308517 1 idle 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1308517 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1308517 -w 256 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1308521 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1' 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1308521 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:22.662 11:06:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:22.662 11:06:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:38:22.662 11:06:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:38:22.662 11:06:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:38:22.662 11:06:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:38:22.662 11:06:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1308517 0 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1308517 0 idle 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1308517 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1308517 -w 256 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1308517 root 20 0 128.2g 78336 32256 S 6.2 0.1 0:20.71 reactor_0' 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1308517 root 20 0 128.2g 78336 32256 S 6.2 0.1 0:20.71 reactor_0 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1308517 1 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1308517 1 idle 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1308517 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1308517 -w 256 00:38:24.577 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1308521 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.14 reactor_1' 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1308521 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.14 reactor_1 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:24.838 11:06:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:25.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:25.099 11:06:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:25.099 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:38:25.099 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:38:25.099 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:25.099 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:38:25.099 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:25.099 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:38:25.099 11:06:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:38:25.099 11:06:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:25.100 rmmod nvme_tcp 00:38:25.100 rmmod nvme_fabrics 00:38:25.100 rmmod nvme_keyring 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1308517 ']' 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1308517 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1308517 ']' 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1308517 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1308517 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1308517' 00:38:25.100 killing process with pid 1308517 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1308517 00:38:25.100 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1308517 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:25.360 11:06:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.905 11:06:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:27.905 00:38:27.905 real 0m25.414s 00:38:27.905 user 0m40.510s 00:38:27.905 sys 0m9.715s 00:38:27.905 11:06:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:27.905 11:06:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:27.905 ************************************ 00:38:27.905 END TEST nvmf_interrupt 00:38:27.905 ************************************ 00:38:27.905 00:38:27.905 real 30m10.292s 00:38:27.905 user 61m42.379s 00:38:27.905 sys 10m17.143s 00:38:27.905 11:06:06 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:27.905 11:06:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:27.905 ************************************ 00:38:27.905 END TEST nvmf_tcp 00:38:27.906 ************************************ 00:38:27.906 11:06:06 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:38:27.906 11:06:06 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:27.906 11:06:06 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:27.906 11:06:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:27.906 11:06:06 -- common/autotest_common.sh@10 -- # set +x 00:38:27.906 ************************************ 00:38:27.906 START TEST spdkcli_nvmf_tcp 00:38:27.906 ************************************ 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:27.906 * Looking for test storage... 00:38:27.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.906 --rc genhtml_branch_coverage=1 00:38:27.906 --rc genhtml_function_coverage=1 00:38:27.906 --rc genhtml_legend=1 00:38:27.906 --rc geninfo_all_blocks=1 00:38:27.906 --rc geninfo_unexecuted_blocks=1 00:38:27.906 00:38:27.906 ' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.906 --rc genhtml_branch_coverage=1 00:38:27.906 --rc genhtml_function_coverage=1 00:38:27.906 --rc genhtml_legend=1 00:38:27.906 --rc geninfo_all_blocks=1 00:38:27.906 --rc geninfo_unexecuted_blocks=1 00:38:27.906 00:38:27.906 ' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.906 --rc genhtml_branch_coverage=1 00:38:27.906 --rc genhtml_function_coverage=1 00:38:27.906 --rc genhtml_legend=1 00:38:27.906 --rc geninfo_all_blocks=1 00:38:27.906 --rc geninfo_unexecuted_blocks=1 00:38:27.906 00:38:27.906 ' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:27.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.906 --rc genhtml_branch_coverage=1 00:38:27.906 --rc genhtml_function_coverage=1 00:38:27.906 --rc genhtml_legend=1 00:38:27.906 --rc geninfo_all_blocks=1 00:38:27.906 --rc geninfo_unexecuted_blocks=1 00:38:27.906 00:38:27.906 ' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:38:27.906 
11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:27.906 11:06:06 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:27.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:27.906 11:06:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1312184 00:38:27.907 11:06:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1312184 00:38:27.907 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1312184 ']' 00:38:27.907 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.907 11:06:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:27.907 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:27.907 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.907 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:27.907 11:06:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:27.907 [2024-11-19 11:06:06.951041] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
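The run_nvmf_tgt/waitforlisten sequence above launches the target on a two-core mask and blocks until its UNIX-domain RPC socket answers. Below is a rough sketch of that bring-up idea; the binary and rpc.py paths, the 100 x 100 ms poll budget, and the error handling are illustrative assumptions, not the autotest_common.sh waitforlisten implementation.

#!/usr/bin/env bash
# Sketch of the target bring-up pattern seen above; paths and poll
# parameters are assumptions, not SPDK's waitforlisten.
SPDK_BIN=./build/bin/nvmf_tgt   # assumed build location
RPC=./scripts/rpc.py            # SPDK's RPC client
SOCK=/var/tmp/spdk.sock         # default app RPC socket

"$SPDK_BIN" -m 0x3 -p 0 &       # -m 0x3: cores 0-1, -p 0: main core 0
tgt_pid=$!

# Poll the RPC server until it answers; rpc_get_methods is a cheap call
# available on every SPDK app.
for _ in $(seq 1 100); do
    if "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null; then
        echo "nvmf_tgt ($tgt_pid) listening on $SOCK"
        break
    fi
    # Bail out early if the target process died during startup.
    kill -0 "$tgt_pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
    sleep 0.1
done

Polling the RPC socket rather than sleeping a fixed interval is what lets the spdkcli_job.py commands further down run as soon as the target is actually ready.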
00:38:27.907 [2024-11-19 11:06:06.951113] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312184 ] 00:38:27.907 [2024-11-19 11:06:07.045304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:27.907 [2024-11-19 11:06:07.098977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:27.907 [2024-11-19 11:06:07.098981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.848 11:06:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:28.848 11:06:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:38:28.849 11:06:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:28.849 11:06:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:28.849 11:06:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:28.849 11:06:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:28.849 11:06:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:38:28.849 11:06:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:28.849 11:06:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:28.849 11:06:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:28.849 11:06:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:28.849 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:28.849 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:28.849 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:28.849 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:28.849 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:28.849 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:28.849 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:28.849 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:28.849 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:28.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:28.849 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:28.849 ' 00:38:31.395 [2024-11-19 11:06:10.499759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:32.780 [2024-11-19 11:06:11.859939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:38:35.324 [2024-11-19 11:06:14.382984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:38:37.869 [2024-11-19 11:06:16.605323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:39.255 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:39.255 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:39.255 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:39.255 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:39.255 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:39.255 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:39.255 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:39.255 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:39.255 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:39.255 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:39.255 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:39.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:39.255 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:39.255 11:06:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:39.255 11:06:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:39.255 11:06:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:39.255 11:06:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:39.255 11:06:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:39.255 11:06:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:39.255 11:06:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:39.255 11:06:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:39.828 11:06:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:39.828 11:06:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:39.828 11:06:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:39.828 11:06:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:39.828 11:06:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:39.828 
11:06:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:39.828 11:06:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:39.828 11:06:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:39.828 11:06:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:39.828 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:39.828 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:39.828 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:39.828 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:39.828 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:39.828 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:39.828 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:39.828 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:39.828 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:39.828 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:39.828 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:39.828 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:39.828 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:39.828 ' 00:38:46.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:46.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:46.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:46.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:46.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:46.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:46.414 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:46.414 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:46.414 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:46.414 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:46.414 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:46.414 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:46.414 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:46.414 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:46.414 
11:06:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1312184 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1312184 ']' 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1312184 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1312184 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:46.414 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1312184' 00:38:46.414 killing process with pid 1312184 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1312184 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1312184 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1312184 ']' 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1312184 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1312184 ']' 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1312184 00:38:46.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1312184) - No such process 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1312184 is not found' 00:38:46.415 Process with pid 1312184 is not found 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:46.415 00:38:46.415 real 0m18.123s 00:38:46.415 user 0m40.188s 00:38:46.415 sys 0m0.891s 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:46.415 11:06:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:46.415 ************************************ 00:38:46.415 END TEST spdkcli_nvmf_tcp 00:38:46.415 ************************************ 00:38:46.415 11:06:24 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:46.415 11:06:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:46.415 11:06:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:46.415 11:06:24 -- common/autotest_common.sh@10 -- # set +x 00:38:46.415 ************************************ 00:38:46.415 START TEST nvmf_identify_passthru 00:38:46.415 ************************************ 00:38:46.415 11:06:24 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:46.415 * Looking for test 
storage... 00:38:46.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:46.415 11:06:24 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:46.415 11:06:24 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:38:46.415 11:06:24 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:46.415 11:06:25 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:38:46.415 11:06:25 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:46.415 11:06:25 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:46.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.415 --rc genhtml_branch_coverage=1 00:38:46.415 --rc genhtml_function_coverage=1 00:38:46.415 --rc genhtml_legend=1 00:38:46.415 --rc geninfo_all_blocks=1 00:38:46.415 --rc geninfo_unexecuted_blocks=1 00:38:46.415 00:38:46.415 ' 00:38:46.415 11:06:25 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:46.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.415 --rc genhtml_branch_coverage=1 00:38:46.415 --rc genhtml_function_coverage=1 00:38:46.415 --rc genhtml_legend=1 00:38:46.415 --rc geninfo_all_blocks=1 00:38:46.415 --rc geninfo_unexecuted_blocks=1 00:38:46.415 00:38:46.415 ' 00:38:46.415 11:06:25 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:46.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.415 --rc genhtml_branch_coverage=1 00:38:46.415 --rc genhtml_function_coverage=1 00:38:46.415 --rc genhtml_legend=1 00:38:46.415 --rc geninfo_all_blocks=1 00:38:46.415 --rc geninfo_unexecuted_blocks=1 00:38:46.415 00:38:46.415 ' 00:38:46.415 11:06:25 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:46.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.415 --rc genhtml_branch_coverage=1 00:38:46.415 --rc genhtml_function_coverage=1 00:38:46.415 --rc genhtml_legend=1 00:38:46.415 --rc geninfo_all_blocks=1 00:38:46.415 --rc geninfo_unexecuted_blocks=1 00:38:46.415 00:38:46.415 ' 00:38:46.415 11:06:25 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.415 11:06:25 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.415 11:06:25 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.415 11:06:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.415 11:06:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.415 11:06:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:46.415 11:06:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:38:46.415 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:46.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:46.416 11:06:25 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.416 11:06:25 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:46.416 11:06:25 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.416 11:06:25 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.416 11:06:25 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.416 11:06:25 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.416 11:06:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.416 11:06:25 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.416 11:06:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:46.416 11:06:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.416 11:06:25 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.416 11:06:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:46.416 11:06:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:46.416 11:06:25 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:38:46.416 11:06:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:38:52.995 11:06:31 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:52.995 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:52.995 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:52.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.995 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:52.996 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:52.996 11:06:31 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:52.996 11:06:31 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:52.996 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:52.996 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:52.996 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:52.996 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:53.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:53.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:38:53.257 00:38:53.257 --- 10.0.0.2 ping statistics --- 00:38:53.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:53.257 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:53.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:53.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:38:53.257 00:38:53.257 --- 10.0.0.1 ping statistics --- 00:38:53.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:53.257 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:53.257 11:06:32 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:53.257 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:53.257 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:38:53.257 11:06:32 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:38:53.257 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:38:53.257 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:38:53.257 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:53.257 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:53.257 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:53.828 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:38:53.828 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:53.828 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:53.828 11:06:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:54.399 11:06:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:38:54.399 11:06:33 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:54.399 11:06:33 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:54.399 11:06:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:54.399 11:06:33 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:54.399 11:06:33 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:54.399 11:06:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:54.399 11:06:33 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1320061 00:38:54.399 11:06:33 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:54.399 11:06:33 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:54.399 11:06:33 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1320061 00:38:54.399 11:06:33 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1320061 ']' 00:38:54.399 11:06:33 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:54.399 11:06:33 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:54.399 11:06:33 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:54.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:54.399 11:06:33 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:54.399 11:06:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:54.399 [2024-11-19 11:06:33.513548] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:38:54.399 [2024-11-19 11:06:33.513601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:54.659 [2024-11-19 11:06:33.608452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:54.659 [2024-11-19 11:06:33.648366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:54.659 [2024-11-19 11:06:33.648400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:54.659 [2024-11-19 11:06:33.648408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:54.659 [2024-11-19 11:06:33.648416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:54.659 [2024-11-19 11:06:33.648422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:54.659 [2024-11-19 11:06:33.649969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:54.659 [2024-11-19 11:06:33.650120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:54.659 [2024-11-19 11:06:33.650271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:54.659 [2024-11-19 11:06:33.650403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.230 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:55.230 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:38:55.230 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:55.230 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.230 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:55.230 INFO: Log level set to 20 00:38:55.230 INFO: Requests: 00:38:55.230 { 00:38:55.230 "jsonrpc": "2.0", 00:38:55.230 "method": "nvmf_set_config", 00:38:55.230 "id": 1, 00:38:55.230 "params": { 00:38:55.230 "admin_cmd_passthru": { 00:38:55.230 "identify_ctrlr": true 00:38:55.230 } 00:38:55.230 } 00:38:55.230 } 00:38:55.230 00:38:55.230 INFO: response: 00:38:55.230 { 00:38:55.230 "jsonrpc": "2.0", 00:38:55.230 "id": 1, 00:38:55.230 "result": true 00:38:55.230 } 00:38:55.230 00:38:55.231 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.231 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:55.231 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.231 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:55.231 INFO: Setting log level to 20 00:38:55.231 INFO: Setting log level to 20 00:38:55.231 INFO: Log level set to 20 00:38:55.231 INFO: Log level set to 20 00:38:55.231 INFO: Requests: 00:38:55.231 { 00:38:55.231 "jsonrpc": "2.0", 00:38:55.231 "method": "framework_start_init", 00:38:55.231 "id": 1 00:38:55.231 } 00:38:55.231 00:38:55.231 INFO: Requests: 00:38:55.231 { 00:38:55.231 "jsonrpc": "2.0", 00:38:55.231 "method": "framework_start_init", 00:38:55.231 "id": 1 00:38:55.231 } 00:38:55.231 00:38:55.491 [2024-11-19 11:06:34.432710] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:55.491 INFO: response: 00:38:55.491 { 00:38:55.491 "jsonrpc": "2.0", 00:38:55.491 "id": 1, 00:38:55.491 "result": true 00:38:55.491 } 00:38:55.491 00:38:55.491 INFO: response: 00:38:55.491 { 00:38:55.491 "jsonrpc": "2.0", 00:38:55.491 "id": 1, 00:38:55.491 "result": true 00:38:55.491 } 00:38:55.492 00:38:55.492 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.492 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:55.492 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.492 11:06:34 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:55.492 INFO: Setting log level to 40 00:38:55.492 INFO: Setting log level to 40 00:38:55.492 INFO: Setting log level to 40 00:38:55.492 [2024-11-19 11:06:34.446301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:55.492 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.492 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:55.492 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:55.492 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:55.492 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:38:55.492 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.492 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:55.752 Nvme0n1 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.752 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.752 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.752 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:55.752 [2024-11-19 11:06:34.848178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.752 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.752 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:55.752 [ 00:38:55.752 { 00:38:55.753 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:55.753 "subtype": "Discovery", 00:38:55.753 "listen_addresses": [], 00:38:55.753 "allow_any_host": true, 00:38:55.753 "hosts": [] 00:38:55.753 }, 00:38:55.753 { 00:38:55.753 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:55.753 "subtype": "NVMe", 00:38:55.753 "listen_addresses": [ 00:38:55.753 { 00:38:55.753 "trtype": "TCP", 00:38:55.753 "adrfam": "IPv4", 00:38:55.753 "traddr": "10.0.0.2", 00:38:55.753 "trsvcid": "4420" 00:38:55.753 } 00:38:55.753 ], 00:38:55.753 "allow_any_host": true, 00:38:55.753 "hosts": [], 00:38:55.753 "serial_number": 
"SPDK00000000000001", 00:38:55.753 "model_number": "SPDK bdev Controller", 00:38:55.753 "max_namespaces": 1, 00:38:55.753 "min_cntlid": 1, 00:38:55.753 "max_cntlid": 65519, 00:38:55.753 "namespaces": [ 00:38:55.753 { 00:38:55.753 "nsid": 1, 00:38:55.753 "bdev_name": "Nvme0n1", 00:38:55.753 "name": "Nvme0n1", 00:38:55.753 "nguid": "36344730526054870025384500000044", 00:38:55.753 "uuid": "36344730-5260-5487-0025-384500000044" 00:38:55.753 } 00:38:55.753 ] 00:38:55.753 } 00:38:55.753 ] 00:38:55.753 11:06:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.753 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:55.753 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:55.753 11:06:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:56.013 11:06:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:38:56.013 11:06:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:56.013 11:06:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:56.013 11:06:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:56.273 11:06:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:38:56.273 11:06:35 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:38:56.273 11:06:35 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:38:56.273 11:06:35 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.273 11:06:35 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:56.273 11:06:35 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:56.273 rmmod nvme_tcp 00:38:56.273 rmmod nvme_fabrics 00:38:56.273 rmmod nvme_keyring 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1320061 ']' 00:38:56.273 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1320061 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1320061 ']' 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1320061 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1320061 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1320061' 00:38:56.273 killing process with pid 1320061 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1320061 00:38:56.273 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1320061 00:38:56.533 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:56.533 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:56.533 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:56.533 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:38:56.533 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:38:56.533 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:56.533 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:38:56.533 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:56.533 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:56.533 11:06:35 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.533 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:56.533 11:06:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.077 11:06:37 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:59.077 00:38:59.077 real 0m12.856s 00:38:59.077 user 0m10.134s 00:38:59.077 sys 0m6.425s 00:38:59.077 11:06:37 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:59.077 11:06:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:59.077 ************************************ 00:38:59.077 END TEST nvmf_identify_passthru 00:38:59.077 ************************************ 00:38:59.077 11:06:37 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:59.077 11:06:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:59.077 11:06:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:59.077 11:06:37 -- common/autotest_common.sh@10 -- # set +x 00:38:59.077 ************************************ 00:38:59.077 START TEST nvmf_dif 00:38:59.077 ************************************ 00:38:59.077 11:06:37 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:59.077 * Looking for test storage... 
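The teardown traced above relies on the ipts/iptr helper pair from nvmf/common.sh: every iptables rule the harness inserts is tagged with an SPDK_NVMF comment, so cleanup can sweep all of them out of the saved ruleset at once instead of tracking individual rules. A minimal sketch of the same tag-and-sweep idea (the interface name eth0 here is illustrative, not from this run):

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }    # tag each rule on insert
  iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; } # drop every tagged rule
  ipts -I INPUT 1 -i eth0 -p tcp --dport 4420 -j ACCEPT
  iptr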
00:38:59.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:59.077 11:06:37 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:59.077 11:06:37 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:38:59.077 11:06:37 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:59.078 11:06:37 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:59.078 11:06:37 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:38:59.078 11:06:37 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:59.078 11:06:37 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:59.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.078 --rc genhtml_branch_coverage=1 00:38:59.078 --rc genhtml_function_coverage=1 00:38:59.078 --rc genhtml_legend=1 00:38:59.078 --rc geninfo_all_blocks=1 00:38:59.078 --rc geninfo_unexecuted_blocks=1 00:38:59.078 00:38:59.078 ' 00:38:59.078 11:06:37 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:59.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.078 --rc genhtml_branch_coverage=1 00:38:59.078 --rc genhtml_function_coverage=1 00:38:59.078 --rc genhtml_legend=1 00:38:59.078 --rc geninfo_all_blocks=1 00:38:59.078 --rc geninfo_unexecuted_blocks=1 00:38:59.078 00:38:59.078 ' 00:38:59.078 11:06:37 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:38:59.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.078 --rc genhtml_branch_coverage=1 00:38:59.078 --rc genhtml_function_coverage=1 00:38:59.078 --rc genhtml_legend=1 00:38:59.078 --rc geninfo_all_blocks=1 00:38:59.078 --rc geninfo_unexecuted_blocks=1 00:38:59.078 00:38:59.078 ' 00:38:59.078 11:06:37 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:59.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.078 --rc genhtml_branch_coverage=1 00:38:59.078 --rc genhtml_function_coverage=1 00:38:59.078 --rc genhtml_legend=1 00:38:59.078 --rc geninfo_all_blocks=1 00:38:59.078 --rc geninfo_unexecuted_blocks=1 00:38:59.078 00:38:59.078 ' 00:38:59.078 11:06:37 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:59.078 11:06:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:59.078 11:06:38 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:38:59.078 11:06:38 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:59.078 11:06:38 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:59.078 11:06:38 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:59.078 11:06:38 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.078 11:06:38 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.078 11:06:38 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.078 11:06:38 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:38:59.078 11:06:38 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:59.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:59.078 11:06:38 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:59.078 11:06:38 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:59.078 11:06:38 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:59.078 11:06:38 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:59.078 11:06:38 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.078 11:06:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:59.078 11:06:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:59.078 11:06:38 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:38:59.078 11:06:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:07.213 11:06:44 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:07.213 11:06:44 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:39:07.213 11:06:44 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:07.214 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:07.214 
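The discovery pass that follows resolves each supported e810 PCI function to its kernel net device through sysfs, then strips the sysfs path down to the bare interface name. The same lookup in isolation, as a sketch (using the PCI addresses this rig reported, purely as an example):

  for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
    done
  done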
11:06:44 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:07.214 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:07.214 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:07.214 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:07.214 11:06:44 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:07.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:07.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:39:07.214 00:39:07.214 --- 10.0.0.2 ping statistics --- 00:39:07.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.214 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:07.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:07.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:39:07.214 00:39:07.214 --- 10.0.0.1 ping statistics --- 00:39:07.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.214 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:07.214 11:06:45 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:09.760 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:39:09.760 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:09.760 11:06:48 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:09.760 11:06:48 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:09.760 11:06:48 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:09.760 11:06:48 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:09.760 11:06:48 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:09.760 11:06:48 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:09.760 11:06:48 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:39:09.760 11:06:48 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:39:09.760 11:06:48 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:09.760 11:06:48 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:09.760 11:06:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:09.760 11:06:48 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1325948 00:39:09.760 11:06:48 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1325948 00:39:09.760 11:06:48 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:09.760 11:06:48 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1325948 ']' 00:39:09.760 11:06:48 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:09.760 11:06:48 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:09.760 11:06:48 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:39:09.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:09.760 11:06:48 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:09.760 11:06:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:10.021 [2024-11-19 11:06:48.958703] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:39:10.021 [2024-11-19 11:06:48.958771] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:10.021 [2024-11-19 11:06:49.059263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.021 [2024-11-19 11:06:49.110299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:10.021 [2024-11-19 11:06:49.110347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:10.021 [2024-11-19 11:06:49.110356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:10.021 [2024-11-19 11:06:49.110362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:10.021 [2024-11-19 11:06:49.110369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:10.021 [2024-11-19 11:06:49.111183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.592 11:06:49 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:10.592 11:06:49 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:39:10.592 11:06:49 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:10.592 11:06:49 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:10.592 11:06:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:10.853 11:06:49 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:10.853 11:06:49 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:39:10.853 11:06:49 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:39:10.853 11:06:49 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.853 11:06:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:10.853 [2024-11-19 11:06:49.816525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:10.853 11:06:49 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.853 11:06:49 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:39:10.853 11:06:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:10.853 11:06:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:10.853 11:06:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:10.853 ************************************ 00:39:10.853 START TEST fio_dif_1_default 00:39:10.853 ************************************ 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:10.853 bdev_null0 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:10.853 [2024-11-19 11:06:49.908958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:10.853 { 00:39:10.853 "params": { 00:39:10.853 "name": "Nvme$subsystem", 00:39:10.853 "trtype": "$TEST_TRANSPORT", 00:39:10.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:10.853 "adrfam": "ipv4", 00:39:10.853 "trsvcid": "$NVMF_PORT", 00:39:10.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:10.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:10.853 "hdgst": ${hdgst:-false}, 00:39:10.853 
"ddgst": ${ddgst:-false} 00:39:10.853 }, 00:39:10.853 "method": "bdev_nvme_attach_controller" 00:39:10.853 } 00:39:10.853 EOF 00:39:10.853 )") 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:39:10.853 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:10.854 "params": { 00:39:10.854 "name": "Nvme0", 00:39:10.854 "trtype": "tcp", 00:39:10.854 "traddr": "10.0.0.2", 00:39:10.854 "adrfam": "ipv4", 00:39:10.854 "trsvcid": "4420", 00:39:10.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:10.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:10.854 "hdgst": false, 00:39:10.854 "ddgst": false 00:39:10.854 }, 00:39:10.854 "method": "bdev_nvme_attach_controller" 00:39:10.854 }' 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:10.854 11:06:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:11.443 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:11.443 fio-3.35 00:39:11.443 Starting 1 thread 00:39:23.674 00:39:23.674 filename0: (groupid=0, jobs=1): err= 0: pid=1326561: Tue Nov 19 11:07:01 2024 00:39:23.674 read: IOPS=198, BW=795KiB/s (814kB/s)(7984KiB/10041msec) 00:39:23.674 slat (nsec): min=5408, max=50265, avg=7005.33, stdev=3253.76 00:39:23.674 clat (usec): min=654, max=42098, avg=20101.71, stdev=20146.65 00:39:23.674 lat (usec): min=660, max=42107, avg=20108.71, stdev=20146.20 00:39:23.674 clat percentiles (usec): 00:39:23.674 | 1.00th=[ 709], 5.00th=[ 791], 10.00th=[ 816], 20.00th=[ 832], 00:39:23.674 | 30.00th=[ 848], 40.00th=[ 914], 50.00th=[ 1057], 60.00th=[41157], 00:39:23.674 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:23.674 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:23.674 | 99.99th=[42206] 00:39:23.674 bw ( KiB/s): min= 704, max= 1152, per=100.00%, avg=796.80, stdev=100.61, samples=20 00:39:23.674 iops : min= 176, max= 288, avg=199.20, stdev=25.15, samples=20 00:39:23.674 lat (usec) : 750=2.71%, 1000=46.24% 00:39:23.674 lat (msec) : 2=3.31%, 4=0.05%, 50=47.70% 00:39:23.674 cpu : usr=93.96%, sys=5.80%, ctx=13, majf=0, minf=231 00:39:23.674 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:23.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.674 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.674 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:39:23.674 00:39:23.674 Run status group 0 (all jobs): 00:39:23.674 READ: bw=795KiB/s (814kB/s), 795KiB/s-795KiB/s (814kB/s-814kB/s), io=7984KiB (8176kB), run=10041-10041msec 00:39:23.674 11:07:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.675 00:39:23.675 real 0m11.414s 00:39:23.675 user 0m27.318s 00:39:23.675 sys 0m0.958s 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 ************************************ 00:39:23.675 END TEST fio_dif_1_default 00:39:23.675 ************************************ 00:39:23.675 11:07:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:23.675 11:07:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:23.675 11:07:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 ************************************ 00:39:23.675 START TEST fio_dif_1_multi_subsystems 00:39:23.675 ************************************ 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 bdev_null0 00:39:23.675 11:07:01 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 [2024-11-19 11:07:01.403135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 bdev_null1 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:23.675 { 00:39:23.675 "params": { 00:39:23.675 "name": "Nvme$subsystem", 00:39:23.675 "trtype": "$TEST_TRANSPORT", 00:39:23.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:23.675 "adrfam": "ipv4", 00:39:23.675 "trsvcid": "$NVMF_PORT", 00:39:23.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:23.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:23.675 "hdgst": ${hdgst:-false}, 00:39:23.675 "ddgst": ${ddgst:-false} 00:39:23.675 }, 00:39:23.675 "method": "bdev_nvme_attach_controller" 00:39:23.675 } 00:39:23.675 EOF 00:39:23.675 )") 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:23.675 
11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:23.675 { 00:39:23.675 "params": { 00:39:23.675 "name": "Nvme$subsystem", 00:39:23.675 "trtype": "$TEST_TRANSPORT", 00:39:23.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:23.675 "adrfam": "ipv4", 00:39:23.675 "trsvcid": "$NVMF_PORT", 00:39:23.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:23.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:23.675 "hdgst": ${hdgst:-false}, 00:39:23.675 "ddgst": ${ddgst:-false} 00:39:23.675 }, 00:39:23.675 "method": "bdev_nvme_attach_controller" 00:39:23.675 } 00:39:23.675 EOF 00:39:23.675 )") 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:23.675 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:23.676 "params": { 00:39:23.676 "name": "Nvme0", 00:39:23.676 "trtype": "tcp", 00:39:23.676 "traddr": "10.0.0.2", 00:39:23.676 "adrfam": "ipv4", 00:39:23.676 "trsvcid": "4420", 00:39:23.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:23.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:23.676 "hdgst": false, 00:39:23.676 "ddgst": false 00:39:23.676 }, 00:39:23.676 "method": "bdev_nvme_attach_controller" 00:39:23.676 },{ 00:39:23.676 "params": { 00:39:23.676 "name": "Nvme1", 00:39:23.676 "trtype": "tcp", 00:39:23.676 "traddr": "10.0.0.2", 00:39:23.676 "adrfam": "ipv4", 00:39:23.676 "trsvcid": "4420", 00:39:23.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:23.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:23.676 "hdgst": false, 00:39:23.676 "ddgst": false 00:39:23.676 }, 00:39:23.676 "method": "bdev_nvme_attach_controller" 00:39:23.676 }' 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:23.676 11:07:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:23.676 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:23.676 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:23.676 fio-3.35 00:39:23.676 Starting 2 threads 00:39:33.785 00:39:33.785 filename0: (groupid=0, jobs=1): err= 0: pid=1328967: Tue Nov 19 11:07:12 2024 00:39:33.785 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10001msec) 00:39:33.785 slat (nsec): min=5403, max=29193, avg=6210.89, stdev=1542.09 00:39:33.785 clat (usec): min=567, max=42366, avg=20948.23, stdev=20151.42 00:39:33.785 lat (usec): min=575, max=42372, avg=20954.44, stdev=20151.38 00:39:33.785 clat percentiles (usec): 00:39:33.785 | 1.00th=[ 619], 5.00th=[ 799], 10.00th=[ 816], 20.00th=[ 832], 00:39:33.785 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[ 1876], 60.00th=[41157], 00:39:33.785 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:33.785 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:39:33.785 | 99.99th=[42206] 00:39:33.785 bw ( KiB/s): min= 704, max= 768, per=50.20%, avg=764.63, stdev=14.68, samples=19 00:39:33.785 iops : min= 176, max= 192, avg=191.16, stdev= 3.67, samples=19 00:39:33.785 lat (usec) : 750=1.68%, 1000=47.27% 00:39:33.785 lat (msec) : 2=1.15%, 50=49.90% 00:39:33.785 cpu : usr=95.69%, sys=4.09%, ctx=32, majf=0, minf=104 00:39:33.785 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.785 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.785 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:33.785 filename1: (groupid=0, jobs=1): err= 0: pid=1328968: Tue Nov 19 11:07:12 2024 00:39:33.785 read: IOPS=190, BW=762KiB/s (780kB/s)(7648KiB/10040msec) 00:39:33.785 slat (nsec): min=5406, max=27946, avg=6191.90, stdev=1291.69 00:39:33.785 clat (usec): min=534, max=42412, avg=20986.17, stdev=20178.43 00:39:33.785 lat (usec): min=540, max=42440, avg=20992.37, stdev=20178.41 00:39:33.785 clat percentiles (usec): 00:39:33.785 | 1.00th=[ 660], 5.00th=[ 766], 10.00th=[ 783], 20.00th=[ 807], 00:39:33.785 | 30.00th=[ 832], 40.00th=[ 848], 50.00th=[ 930], 60.00th=[41157], 00:39:33.785 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:33.785 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:39:33.785 | 99.99th=[42206] 00:39:33.785 bw ( KiB/s): min= 672, max= 832, per=50.13%, avg=763.20, stdev=28.00, samples=20 00:39:33.785 iops : min= 168, max= 208, avg=190.80, stdev= 7.00, samples=20 00:39:33.785 lat (usec) : 750=2.82%, 1000=47.18% 00:39:33.785 lat (msec) : 50=50.00% 00:39:33.785 cpu : usr=95.34%, sys=4.46%, ctx=14, majf=0, minf=156 00:39:33.785 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.785 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.785 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:33.785 00:39:33.785 Run status group 0 (all jobs): 00:39:33.785 READ: bw=1522KiB/s (1558kB/s), 762KiB/s-763KiB/s (780kB/s-781kB/s), io=14.9MiB (15.6MB), run=10001-10040msec 00:39:33.785 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:39:33.785 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:39:33.785 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:33.785 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:33.785 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:39:33.785 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:33.785 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.785 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.046 11:07:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:34.046 11:07:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.046 11:07:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:34.046 11:07:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.046 11:07:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:34.046 11:07:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.046 00:39:34.046 real 0m11.657s 00:39:34.046 user 0m38.438s 00:39:34.046 sys 0m1.218s 00:39:34.046 11:07:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:34.046 11:07:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:34.046 ************************************ 00:39:34.046 END TEST fio_dif_1_multi_subsystems 00:39:34.046 ************************************ 00:39:34.046 11:07:13 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:39:34.046 11:07:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:34.046 11:07:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:34.046 11:07:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:34.046 ************************************ 00:39:34.046 START TEST fio_dif_rand_params 00:39:34.046 ************************************ 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:34.046 bdev_null0 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:34.046 [2024-11-19 11:07:13.139410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:34.046 { 00:39:34.046 "params": { 00:39:34.046 "name": "Nvme$subsystem", 00:39:34.046 "trtype": "$TEST_TRANSPORT", 00:39:34.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:34.046 "adrfam": "ipv4", 00:39:34.046 "trsvcid": "$NVMF_PORT", 00:39:34.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:34.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:34.046 "hdgst": ${hdgst:-false}, 00:39:34.046 "ddgst": ${ddgst:-false} 00:39:34.046 }, 00:39:34.046 "method": "bdev_nvme_attach_controller" 00:39:34.046 } 00:39:34.046 EOF 00:39:34.046 )") 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:34.046 "params": { 00:39:34.046 "name": "Nvme0", 00:39:34.046 "trtype": "tcp", 00:39:34.046 "traddr": "10.0.0.2", 00:39:34.046 "adrfam": "ipv4", 00:39:34.046 "trsvcid": "4420", 00:39:34.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:34.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:34.046 "hdgst": false, 00:39:34.046 "ddgst": false 00:39:34.046 }, 00:39:34.046 "method": "bdev_nvme_attach_controller" 00:39:34.046 }' 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:34.046 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:34.047 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:34.047 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:34.047 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:34.047 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:34.047 11:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:34.615 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:34.616 ... 
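[Editorial sketch] The command line assembled in the trace above reduces to a plain fio invocation against SPDK's external bdev engine, with the JSON emitted by gen_nvmf_target_json passed via /dev/fd/62. A minimal standalone equivalent, assuming an SPDK tree built with ./configure --with-fio and the attach-controller config saved to a file (bdev.json and the paths below are illustrative, not taken from this run; Nvme0n1 follows SPDK's <controller>n<nsid> bdev naming):

# Standalone sketch of the traced fio run (bs=128k, 3 jobs, iodepth=3, runtime=5).
# bdev.json holds the bdev_nvme_attach_controller params printed above, embedded
# in SPDK's standard {"subsystems": [...]} JSON config wrapper.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=./bdev.json \
    --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
    --runtime=5 --thread=1   # thread=1 is required by the SPDK fio plugin

The harness does the same thing, only feeding the config over a file descriptor instead of a file so nothing touches disk.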
00:39:34.616 fio-3.35 00:39:34.616 Starting 3 threads 00:39:41.198 00:39:41.198 filename0: (groupid=0, jobs=1): err= 0: pid=1331176: Tue Nov 19 11:07:19 2024 00:39:41.198 read: IOPS=346, BW=43.3MiB/s (45.4MB/s)(218MiB/5045msec) 00:39:41.198 slat (nsec): min=5414, max=65009, avg=6646.08, stdev=1940.04 00:39:41.198 clat (usec): min=4122, max=51070, avg=8651.56, stdev=5201.81 00:39:41.198 lat (usec): min=4128, max=51076, avg=8658.21, stdev=5201.99 00:39:41.198 clat percentiles (usec): 00:39:41.198 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6063], 20.00th=[ 6652], 00:39:41.198 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8291], 00:39:41.198 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10945], 00:39:41.198 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50070], 99.95th=[51119], 00:39:41.198 | 99.99th=[51119] 00:39:41.198 bw ( KiB/s): min=38400, max=49408, per=37.04%, avg=44646.40, stdev=3475.07, samples=10 00:39:41.198 iops : min= 300, max= 386, avg=348.80, stdev=27.15, samples=10 00:39:41.198 lat (msec) : 10=85.92%, 20=12.54%, 50=1.43%, 100=0.11% 00:39:41.198 cpu : usr=93.74%, sys=5.99%, ctx=11, majf=0, minf=217 00:39:41.198 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.198 issued rwts: total=1747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.198 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:41.198 filename0: (groupid=0, jobs=1): err= 0: pid=1331177: Tue Nov 19 11:07:19 2024 00:39:41.198 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(177MiB/5046msec) 00:39:41.198 slat (nsec): min=5441, max=37192, avg=7478.38, stdev=1889.24 00:39:41.198 clat (usec): min=3963, max=90838, avg=10627.05, stdev=12082.94 00:39:41.198 lat (usec): min=3971, max=90844, avg=10634.53, stdev=12082.88 00:39:41.198 clat percentiles (usec): 00:39:41.198 | 1.00th=[ 4817], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6390], 00:39:41.198 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7308], 60.00th=[ 7635], 00:39:41.198 | 70.00th=[ 7963], 80.00th=[ 8455], 90.00th=[ 9503], 95.00th=[47973], 00:39:41.198 | 99.00th=[50594], 99.50th=[51643], 99.90th=[90702], 99.95th=[90702], 00:39:41.198 | 99.99th=[90702] 00:39:41.198 bw ( KiB/s): min=22272, max=47616, per=30.09%, avg=36275.20, stdev=7566.38, samples=10 00:39:41.198 iops : min= 174, max= 372, avg=283.40, stdev=59.11, samples=10 00:39:41.198 lat (msec) : 4=0.07%, 10=91.40%, 20=0.85%, 50=6.34%, 100=1.34% 00:39:41.198 cpu : usr=93.46%, sys=6.26%, ctx=14, majf=0, minf=45 00:39:41.198 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.198 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.198 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:41.198 filename0: (groupid=0, jobs=1): err= 0: pid=1331178: Tue Nov 19 11:07:19 2024 00:39:41.198 read: IOPS=314, BW=39.3MiB/s (41.2MB/s)(198MiB/5043msec) 00:39:41.198 slat (nsec): min=5402, max=31915, avg=6643.42, stdev=1679.65 00:39:41.198 clat (usec): min=3575, max=51597, avg=9504.30, stdev=6265.92 00:39:41.198 lat (usec): min=3584, max=51606, avg=9510.94, stdev=6266.00 00:39:41.198 clat percentiles (usec): 00:39:41.198 | 1.00th=[ 4948], 5.00th=[ 5932], 10.00th=[ 6390], 
20.00th=[ 6915], 00:39:41.198 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9110], 00:39:41.198 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11207], 95.00th=[11731], 00:39:41.198 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:39:41.198 | 99.99th=[51643] 00:39:41.198 bw ( KiB/s): min=33792, max=47360, per=33.64%, avg=40550.40, stdev=4582.96, samples=10 00:39:41.198 iops : min= 264, max= 370, avg=316.80, stdev=35.64, samples=10 00:39:41.198 lat (msec) : 4=0.25%, 10=73.01%, 20=24.34%, 50=2.21%, 100=0.19% 00:39:41.198 cpu : usr=92.92%, sys=6.82%, ctx=12, majf=0, minf=122 00:39:41.198 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.198 issued rwts: total=1586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.198 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:41.198 00:39:41.198 Run status group 0 (all jobs): 00:39:41.198 READ: bw=118MiB/s (123MB/s), 35.2MiB/s-43.3MiB/s (36.9MB/s-45.4MB/s), io=594MiB (623MB), run=5043-5046msec 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.198 bdev_null0 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.198 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.199 [2024-11-19 11:07:19.433433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.199 bdev_null1 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.199 11:07:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.199 bdev_null2 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
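[Editorial sketch] The create_subsystems 0 1 2 sequence traced above issues four RPCs per subsystem, exactly as one would run them by hand against a live target with SPDK's rpc.py. Spelled out for subsystem 2 (this assumes the TCP transport was already created earlier in the run, which the harness does during setup):

# Hand-run equivalent of create_subsystem 2, mirroring the rpc_cmd lines above.
./scripts/rpc.py bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 \
    --serial-number 53313233-2 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 \
    -t tcp -a 10.0.0.2 -s 4420

Note the only difference from the earlier fio_dif_1_multi_subsystems setup is --dif-type 2 on the null bdev, matching this test's NULL_DIF=2 parameter.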
00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:41.199 { 00:39:41.199 "params": { 00:39:41.199 "name": "Nvme$subsystem", 00:39:41.199 "trtype": "$TEST_TRANSPORT", 00:39:41.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:41.199 "adrfam": "ipv4", 00:39:41.199 "trsvcid": "$NVMF_PORT", 00:39:41.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:41.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:41.199 "hdgst": ${hdgst:-false}, 00:39:41.199 "ddgst": ${ddgst:-false} 00:39:41.199 }, 00:39:41.199 "method": "bdev_nvme_attach_controller" 00:39:41.199 } 00:39:41.199 EOF 00:39:41.199 )") 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:41.199 { 00:39:41.199 "params": { 00:39:41.199 "name": "Nvme$subsystem", 00:39:41.199 "trtype": "$TEST_TRANSPORT", 00:39:41.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:41.199 "adrfam": "ipv4", 00:39:41.199 "trsvcid": "$NVMF_PORT", 00:39:41.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:41.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:41.199 "hdgst": ${hdgst:-false}, 00:39:41.199 "ddgst": ${ddgst:-false} 00:39:41.199 }, 00:39:41.199 "method": "bdev_nvme_attach_controller" 00:39:41.199 } 00:39:41.199 EOF 00:39:41.199 )") 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
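[Editorial sketch] The config+=("$(cat <<-EOF ...)") here-docs traced above (and the third that follows below) implement a simple bash pattern: accumulate one JSON object per subsystem in an array, then join the elements with commas so jq and printf can emit a single attach-controller document. A standalone sketch of the pattern with the params abridged for brevity (the real template also carries trtype, traddr, trsvcid, and the NQNs visible in the final printf):

# Sketch of the config-array pattern from gen_nvmf_target_json (abridged params).
config=()
for subsystem in 0 1 2; do
	config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
IFS=,                          # "${config[*]}" joins elements with the first char of IFS
printf '%s\n' "${config[*]}"   # emits: {...},{...},{...}

This is why the trace shows IFS=, (nvmf/common.sh@585) immediately before the printf of the joined objects; the jq . step just above it validates each fragment before joining.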
00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:41.199 { 00:39:41.199 "params": { 00:39:41.199 "name": "Nvme$subsystem", 00:39:41.199 "trtype": "$TEST_TRANSPORT", 00:39:41.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:41.199 "adrfam": "ipv4", 00:39:41.199 "trsvcid": "$NVMF_PORT", 00:39:41.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:41.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:41.199 "hdgst": ${hdgst:-false}, 00:39:41.199 "ddgst": ${ddgst:-false} 00:39:41.199 }, 00:39:41.199 "method": "bdev_nvme_attach_controller" 00:39:41.199 } 00:39:41.199 EOF 00:39:41.199 )") 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:41.199 11:07:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:41.199 "params": { 00:39:41.199 "name": "Nvme0", 00:39:41.199 "trtype": "tcp", 00:39:41.199 "traddr": "10.0.0.2", 00:39:41.199 "adrfam": "ipv4", 00:39:41.199 "trsvcid": "4420", 00:39:41.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:41.199 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:41.199 "hdgst": false, 00:39:41.199 "ddgst": false 00:39:41.199 }, 00:39:41.199 "method": "bdev_nvme_attach_controller" 00:39:41.199 },{ 00:39:41.199 "params": { 00:39:41.199 "name": "Nvme1", 00:39:41.199 "trtype": "tcp", 00:39:41.199 "traddr": "10.0.0.2", 00:39:41.199 "adrfam": "ipv4", 00:39:41.199 "trsvcid": "4420", 00:39:41.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:41.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:41.199 "hdgst": false, 00:39:41.199 "ddgst": false 00:39:41.199 }, 00:39:41.199 "method": "bdev_nvme_attach_controller" 00:39:41.199 },{ 00:39:41.199 "params": { 00:39:41.199 "name": "Nvme2", 00:39:41.199 "trtype": "tcp", 00:39:41.199 "traddr": "10.0.0.2", 00:39:41.200 "adrfam": "ipv4", 00:39:41.200 "trsvcid": "4420", 00:39:41.200 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:41.200 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:41.200 "hdgst": false, 00:39:41.200 "ddgst": false 00:39:41.200 }, 00:39:41.200 "method": "bdev_nvme_attach_controller" 00:39:41.200 }' 00:39:41.200 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:41.200 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:41.200 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:41.200 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:41.200 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:41.200 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:41.200 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:39:41.200 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:41.200 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:41.200 11:07:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:41.200 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:41.200 ... 00:39:41.200 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:41.200 ... 00:39:41.200 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:41.200 ... 00:39:41.200 fio-3.35 00:39:41.200 Starting 24 threads 00:39:53.428 00:39:53.428 filename0: (groupid=0, jobs=1): err= 0: pid=1332673: Tue Nov 19 11:07:30 2024 00:39:53.428 read: IOPS=673, BW=2695KiB/s (2760kB/s)(26.3MiB/10006msec) 00:39:53.428 slat (usec): min=5, max=124, avg=11.37, stdev= 9.38 00:39:53.428 clat (usec): min=8368, max=30220, avg=23652.37, stdev=1737.25 00:39:53.428 lat (usec): min=8376, max=30227, avg=23663.74, stdev=1734.88 00:39:53.428 clat percentiles (usec): 00:39:53.428 | 1.00th=[13042], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:53.429 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.429 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.429 | 99.00th=[25560], 99.50th=[25560], 99.90th=[29754], 99.95th=[30016], 00:39:53.429 | 99.99th=[30278] 00:39:53.429 bw ( KiB/s): min= 2560, max= 3120, per=4.18%, avg=2690.40, stdev=111.37, samples=20 00:39:53.429 iops : min= 640, max= 780, avg=672.60, stdev=27.84, samples=20 00:39:53.429 lat (msec) : 10=0.30%, 20=2.37%, 50=97.33% 00:39:53.429 cpu : usr=98.92%, sys=0.81%, ctx=38, majf=0, minf=55 00:39:53.429 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:53.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 issued rwts: total=6742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.429 filename0: (groupid=0, jobs=1): err= 0: pid=1332674: Tue Nov 19 11:07:30 2024 00:39:53.429 read: IOPS=679, BW=2719KiB/s (2785kB/s)(26.6MiB/10005msec) 00:39:53.429 slat (usec): min=5, max=118, avg=11.80, stdev= 9.11 00:39:53.429 clat (usec): min=7809, max=40096, avg=23441.19, stdev=2293.60 00:39:53.429 lat (usec): min=7819, max=40110, avg=23453.00, stdev=2292.82 00:39:53.429 clat percentiles (usec): 00:39:53.429 | 1.00th=[11994], 5.00th=[19268], 10.00th=[23200], 20.00th=[23462], 00:39:53.429 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.429 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.429 | 99.00th=[26870], 99.50th=[27919], 99.90th=[35390], 99.95th=[37487], 00:39:53.429 | 99.99th=[40109] 00:39:53.429 bw ( KiB/s): min= 2560, max= 3168, per=4.21%, avg=2714.40, stdev=138.93, samples=20 00:39:53.429 iops : min= 640, max= 792, avg=678.60, stdev=34.73, samples=20 00:39:53.429 lat (msec) : 10=0.50%, 20=4.90%, 50=94.60% 00:39:53.429 cpu : usr=99.00%, sys=0.74%, ctx=15, majf=0, minf=18 00:39:53.429 IO depths : 1=4.7%, 2=10.5%, 
4=23.5%, 8=53.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:39:53.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 issued rwts: total=6802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.429 filename0: (groupid=0, jobs=1): err= 0: pid=1332675: Tue Nov 19 11:07:30 2024 00:39:53.429 read: IOPS=670, BW=2682KiB/s (2747kB/s)(26.2MiB/10003msec) 00:39:53.429 slat (nsec): min=5570, max=82044, avg=16552.02, stdev=10663.13 00:39:53.429 clat (usec): min=6096, max=41336, avg=23705.42, stdev=1797.03 00:39:53.429 lat (usec): min=6122, max=41363, avg=23721.97, stdev=1797.24 00:39:53.429 clat percentiles (usec): 00:39:53.429 | 1.00th=[15926], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:39:53.429 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.429 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.429 | 99.00th=[26346], 99.50th=[31589], 99.90th=[39584], 99.95th=[39584], 00:39:53.429 | 99.99th=[41157] 00:39:53.429 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2661.32, stdev=53.10, samples=19 00:39:53.429 iops : min= 640, max= 672, avg=665.32, stdev=13.30, samples=19 00:39:53.429 lat (msec) : 10=0.27%, 20=1.97%, 50=97.76% 00:39:53.429 cpu : usr=98.88%, sys=0.82%, ctx=43, majf=0, minf=26 00:39:53.429 IO depths : 1=5.6%, 2=11.7%, 4=24.6%, 8=51.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:39:53.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 issued rwts: total=6708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.429 filename0: (groupid=0, jobs=1): err= 0: pid=1332676: Tue Nov 19 11:07:30 2024 00:39:53.429 read: IOPS=671, BW=2685KiB/s (2749kB/s)(26.3MiB/10014msec) 00:39:53.429 slat (usec): min=5, max=106, avg=24.83, stdev=18.29 00:39:53.429 clat (usec): min=11437, max=39445, avg=23633.69, stdev=1872.20 00:39:53.429 lat (usec): min=11443, max=39474, avg=23658.52, stdev=1872.71 00:39:53.429 clat percentiles (usec): 00:39:53.429 | 1.00th=[15401], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:39:53.429 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:53.429 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:39:53.429 | 99.00th=[30278], 99.50th=[31851], 99.90th=[36439], 99.95th=[36439], 00:39:53.429 | 99.99th=[39584] 00:39:53.429 bw ( KiB/s): min= 2560, max= 2784, per=4.16%, avg=2682.11, stdev=49.23, samples=19 00:39:53.429 iops : min= 640, max= 696, avg=670.53, stdev=12.31, samples=19 00:39:53.429 lat (msec) : 20=3.57%, 50=96.43% 00:39:53.429 cpu : usr=98.68%, sys=0.88%, ctx=82, majf=0, minf=30 00:39:53.429 IO depths : 1=5.6%, 2=11.4%, 4=23.7%, 8=52.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:39:53.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 issued rwts: total=6722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.429 filename0: (groupid=0, jobs=1): err= 0: pid=1332677: Tue Nov 19 11:07:30 2024 00:39:53.429 read: IOPS=673, BW=2693KiB/s (2758kB/s)(26.3MiB/10005msec) 00:39:53.429 slat (nsec): min=5646, max=89971, avg=15051.77, stdev=12173.55 
00:39:53.429 clat (usec): min=4429, max=33995, avg=23637.26, stdev=1809.49 00:39:53.429 lat (usec): min=4437, max=34001, avg=23652.31, stdev=1809.03 00:39:53.429 clat percentiles (usec): 00:39:53.429 | 1.00th=[11863], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:53.429 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.429 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:39:53.429 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26870], 99.95th=[33817], 00:39:53.429 | 99.99th=[33817] 00:39:53.429 bw ( KiB/s): min= 2560, max= 3078, per=4.17%, avg=2688.30, stdev=102.92, samples=20 00:39:53.429 iops : min= 640, max= 769, avg=672.05, stdev=25.63, samples=20 00:39:53.429 lat (msec) : 10=0.48%, 20=1.48%, 50=98.04% 00:39:53.429 cpu : usr=98.64%, sys=0.96%, ctx=38, majf=0, minf=33 00:39:53.429 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:53.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.429 filename0: (groupid=0, jobs=1): err= 0: pid=1332678: Tue Nov 19 11:07:30 2024 00:39:53.429 read: IOPS=669, BW=2677KiB/s (2741kB/s)(26.1MiB/10002msec) 00:39:53.429 slat (nsec): min=4629, max=85303, avg=16420.45, stdev=11362.00 00:39:53.429 clat (usec): min=11257, max=40826, avg=23766.31, stdev=1580.04 00:39:53.429 lat (usec): min=11263, max=40839, avg=23782.73, stdev=1579.84 00:39:53.429 clat percentiles (usec): 00:39:53.429 | 1.00th=[15795], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:53.429 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.429 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.429 | 99.00th=[25297], 99.50th=[31589], 99.90th=[40633], 99.95th=[40633], 00:39:53.429 | 99.99th=[40633] 00:39:53.429 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2661.05, stdev=53.61, samples=19 00:39:53.429 iops : min= 640, max= 672, avg=665.26, stdev=13.40, samples=19 00:39:53.429 lat (msec) : 20=1.76%, 50=98.24% 00:39:53.429 cpu : usr=98.93%, sys=0.79%, ctx=68, majf=0, minf=35 00:39:53.429 IO depths : 1=5.9%, 2=12.0%, 4=24.8%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:53.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 issued rwts: total=6694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.429 filename0: (groupid=0, jobs=1): err= 0: pid=1332679: Tue Nov 19 11:07:30 2024 00:39:53.429 read: IOPS=667, BW=2671KiB/s (2735kB/s)(26.1MiB/10004msec) 00:39:53.429 slat (nsec): min=5038, max=82898, avg=17853.40, stdev=12461.91 00:39:53.429 clat (usec): min=5750, max=39943, avg=23802.69, stdev=2510.98 00:39:53.429 lat (usec): min=5761, max=39959, avg=23820.54, stdev=2511.62 00:39:53.429 clat percentiles (usec): 00:39:53.429 | 1.00th=[13566], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:39:53.429 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.429 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:39:53.429 | 99.00th=[36963], 99.50th=[37487], 99.90th=[40109], 99.95th=[40109], 00:39:53.429 | 99.99th=[40109] 00:39:53.429 bw ( KiB/s): min= 2560, max= 
2736, per=4.14%, avg=2664.42, stdev=58.51, samples=19 00:39:53.429 iops : min= 640, max= 684, avg=666.11, stdev=14.63, samples=19 00:39:53.429 lat (msec) : 10=0.24%, 20=3.08%, 50=96.68% 00:39:53.429 cpu : usr=98.79%, sys=0.88%, ctx=43, majf=0, minf=23 00:39:53.429 IO depths : 1=3.8%, 2=9.5%, 4=23.2%, 8=54.7%, 16=8.7%, 32=0.0%, >=64=0.0% 00:39:53.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.429 issued rwts: total=6680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.429 filename0: (groupid=0, jobs=1): err= 0: pid=1332680: Tue Nov 19 11:07:30 2024 00:39:53.429 read: IOPS=668, BW=2672KiB/s (2736kB/s)(26.1MiB/10011msec) 00:39:53.429 slat (nsec): min=5626, max=84241, avg=18635.32, stdev=10562.98 00:39:53.429 clat (usec): min=11068, max=33475, avg=23787.87, stdev=1339.21 00:39:53.429 lat (usec): min=11073, max=33481, avg=23806.51, stdev=1339.37 00:39:53.429 clat percentiles (usec): 00:39:53.429 | 1.00th=[17695], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:53.429 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.429 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.429 | 99.00th=[28967], 99.50th=[29754], 99.90th=[30540], 99.95th=[32113], 00:39:53.429 | 99.99th=[33424] 00:39:53.429 bw ( KiB/s): min= 2560, max= 2704, per=4.13%, avg=2661.05, stdev=54.14, samples=19 00:39:53.429 iops : min= 640, max= 676, avg=665.26, stdev=13.54, samples=19 00:39:53.429 lat (msec) : 20=1.67%, 50=98.33% 00:39:53.429 cpu : usr=98.92%, sys=0.82%, ctx=27, majf=0, minf=24 00:39:53.430 IO depths : 1=5.5%, 2=11.5%, 4=24.2%, 8=51.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:39:53.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.430 filename1: (groupid=0, jobs=1): err= 0: pid=1332681: Tue Nov 19 11:07:30 2024 00:39:53.430 read: IOPS=680, BW=2722KiB/s (2787kB/s)(26.6MiB/10003msec) 00:39:53.430 slat (usec): min=5, max=102, avg=15.71, stdev=14.38 00:39:53.430 clat (usec): min=7838, max=59411, avg=23421.23, stdev=3724.90 00:39:53.430 lat (usec): min=7844, max=59427, avg=23436.94, stdev=3726.03 00:39:53.430 clat percentiles (usec): 00:39:53.430 | 1.00th=[13829], 5.00th=[16188], 10.00th=[18482], 20.00th=[22152], 00:39:53.430 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.430 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26870], 95.00th=[29492], 00:39:53.430 | 99.00th=[36439], 99.50th=[38536], 99.90th=[41681], 99.95th=[41681], 00:39:53.430 | 99.99th=[59507] 00:39:53.430 bw ( KiB/s): min= 2549, max= 2848, per=4.22%, avg=2717.74, stdev=81.74, samples=19 00:39:53.430 iops : min= 637, max= 712, avg=679.42, stdev=20.46, samples=19 00:39:53.430 lat (msec) : 10=0.06%, 20=14.00%, 50=85.91%, 100=0.03% 00:39:53.430 cpu : usr=98.93%, sys=0.78%, ctx=51, majf=0, minf=28 00:39:53.430 IO depths : 1=1.1%, 2=2.7%, 4=8.2%, 8=73.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:39:53.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 complete : 0=0.0%, 4=90.2%, 8=6.9%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 issued rwts: total=6806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:39:53.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.430 filename1: (groupid=0, jobs=1): err= 0: pid=1332682: Tue Nov 19 11:07:30 2024 00:39:53.430 read: IOPS=669, BW=2677KiB/s (2742kB/s)(26.1MiB/10001msec) 00:39:53.430 slat (nsec): min=5642, max=91556, avg=24367.42, stdev=14928.58 00:39:53.430 clat (usec): min=8919, max=35213, avg=23681.51, stdev=1632.72 00:39:53.430 lat (usec): min=8928, max=35219, avg=23705.88, stdev=1632.54 00:39:53.430 clat percentiles (usec): 00:39:53.430 | 1.00th=[13698], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:53.430 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:53.430 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:39:53.430 | 99.00th=[26346], 99.50th=[29754], 99.90th=[35390], 99.95th=[35390], 00:39:53.430 | 99.99th=[35390] 00:39:53.430 bw ( KiB/s): min= 2560, max= 2992, per=4.16%, avg=2677.05, stdev=93.00, samples=19 00:39:53.430 iops : min= 640, max= 748, avg=669.26, stdev=23.25, samples=19 00:39:53.430 lat (msec) : 10=0.36%, 20=1.02%, 50=98.63% 00:39:53.430 cpu : usr=98.90%, sys=0.83%, ctx=21, majf=0, minf=32 00:39:53.430 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:53.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 issued rwts: total=6694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.430 filename1: (groupid=0, jobs=1): err= 0: pid=1332683: Tue Nov 19 11:07:30 2024 00:39:53.430 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10008msec) 00:39:53.430 slat (nsec): min=5601, max=94370, avg=26848.78, stdev=17346.02 00:39:53.430 clat (usec): min=10469, max=38043, avg=23674.43, stdev=922.51 00:39:53.430 lat (usec): min=10478, max=38048, avg=23701.27, stdev=923.26 00:39:53.430 clat percentiles (usec): 00:39:53.430 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:53.430 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:53.430 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:39:53.430 | 99.00th=[25297], 99.50th=[25822], 99.90th=[26870], 99.95th=[27657], 00:39:53.430 | 99.99th=[38011] 00:39:53.430 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2667.79, stdev=47.95, samples=19 00:39:53.430 iops : min= 640, max= 672, avg=666.95, stdev=11.99, samples=19 00:39:53.430 lat (msec) : 20=0.78%, 50=99.22% 00:39:53.430 cpu : usr=99.00%, sys=0.73%, ctx=35, majf=0, minf=30 00:39:53.430 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:53.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.430 filename1: (groupid=0, jobs=1): err= 0: pid=1332684: Tue Nov 19 11:07:30 2024 00:39:53.430 read: IOPS=671, BW=2685KiB/s (2749kB/s)(26.2MiB/10004msec) 00:39:53.430 slat (nsec): min=4902, max=85130, avg=17858.71, stdev=12159.53 00:39:53.430 clat (usec): min=6797, max=40397, avg=23669.03, stdev=1973.33 00:39:53.430 lat (usec): min=6803, max=40411, avg=23686.89, stdev=1974.04 00:39:53.430 clat percentiles (usec): 00:39:53.430 | 1.00th=[15795], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:39:53.430 | 
30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:53.430 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.430 | 99.00th=[29754], 99.50th=[33424], 99.90th=[40109], 99.95th=[40633], 00:39:53.430 | 99.99th=[40633] 00:39:53.430 bw ( KiB/s): min= 2560, max= 2880, per=4.15%, avg=2672.00, stdev=77.65, samples=19 00:39:53.430 iops : min= 640, max= 720, avg=668.00, stdev=19.41, samples=19 00:39:53.430 lat (msec) : 10=0.09%, 20=3.40%, 50=96.51% 00:39:53.430 cpu : usr=99.07%, sys=0.66%, ctx=10, majf=0, minf=31 00:39:53.430 IO depths : 1=5.5%, 2=11.1%, 4=22.8%, 8=53.4%, 16=7.2%, 32=0.0%, >=64=0.0% 00:39:53.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 issued rwts: total=6714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.430 filename1: (groupid=0, jobs=1): err= 0: pid=1332685: Tue Nov 19 11:07:30 2024 00:39:53.430 read: IOPS=676, BW=2707KiB/s (2772kB/s)(26.5MiB/10011msec) 00:39:53.430 slat (nsec): min=5569, max=83020, avg=14669.90, stdev=12868.44 00:39:53.430 clat (usec): min=7330, max=49092, avg=23548.40, stdev=3986.72 00:39:53.430 lat (usec): min=7338, max=49110, avg=23563.07, stdev=3988.02 00:39:53.430 clat percentiles (usec): 00:39:53.430 | 1.00th=[13698], 5.00th=[16057], 10.00th=[18220], 20.00th=[22414], 00:39:53.430 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.430 | 70.00th=[23987], 80.00th=[24511], 90.00th=[27395], 95.00th=[30540], 00:39:53.430 | 99.00th=[37487], 99.50th=[38011], 99.90th=[49021], 99.95th=[49021], 00:39:53.430 | 99.99th=[49021] 00:39:53.430 bw ( KiB/s): min= 2432, max= 2880, per=4.18%, avg=2693.89, stdev=96.81, samples=19 00:39:53.430 iops : min= 608, max= 720, avg=673.47, stdev=24.20, samples=19 00:39:53.430 lat (msec) : 10=0.12%, 20=13.67%, 50=86.21% 00:39:53.430 cpu : usr=98.86%, sys=0.87%, ctx=14, majf=0, minf=28 00:39:53.430 IO depths : 1=1.0%, 2=2.0%, 4=7.3%, 8=75.6%, 16=14.1%, 32=0.0%, >=64=0.0% 00:39:53.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 complete : 0=0.0%, 4=90.0%, 8=6.8%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 issued rwts: total=6774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.430 filename1: (groupid=0, jobs=1): err= 0: pid=1332686: Tue Nov 19 11:07:30 2024 00:39:53.430 read: IOPS=674, BW=2699KiB/s (2764kB/s)(26.4MiB/10008msec) 00:39:53.430 slat (nsec): min=5592, max=53056, avg=12328.01, stdev=8048.34 00:39:53.430 clat (usec): min=8290, max=36142, avg=23604.69, stdev=1916.01 00:39:53.430 lat (usec): min=8298, max=36149, avg=23617.02, stdev=1916.14 00:39:53.430 clat percentiles (usec): 00:39:53.430 | 1.00th=[12125], 5.00th=[22414], 10.00th=[23462], 20.00th=[23462], 00:39:53.430 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.430 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.430 | 99.00th=[26608], 99.50th=[27395], 99.90th=[34866], 99.95th=[35914], 00:39:53.430 | 99.99th=[35914] 00:39:53.430 bw ( KiB/s): min= 2560, max= 2864, per=4.18%, avg=2695.45, stdev=67.06, samples=20 00:39:53.430 iops : min= 640, max= 716, avg=673.85, stdev=16.74, samples=20 00:39:53.430 lat (msec) : 10=0.09%, 20=3.26%, 50=96.65% 00:39:53.430 cpu : usr=98.29%, sys=1.17%, ctx=234, majf=0, minf=30 00:39:53.430 IO depths : 
1=5.0%, 2=10.9%, 4=24.0%, 8=52.7%, 16=7.5%, 32=0.0%, >=64=0.0% 00:39:53.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 issued rwts: total=6754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.430 filename1: (groupid=0, jobs=1): err= 0: pid=1332687: Tue Nov 19 11:07:30 2024 00:39:53.430 read: IOPS=668, BW=2674KiB/s (2739kB/s)(26.1MiB/10003msec) 00:39:53.430 slat (nsec): min=5592, max=64391, avg=14515.15, stdev=9302.07 00:39:53.430 clat (usec): min=5397, max=39805, avg=23806.08, stdev=1481.57 00:39:53.430 lat (usec): min=5408, max=39821, avg=23820.59, stdev=1481.43 00:39:53.430 clat percentiles (usec): 00:39:53.430 | 1.00th=[20841], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:53.430 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.430 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.430 | 99.00th=[26346], 99.50th=[27919], 99.90th=[39584], 99.95th=[39584], 00:39:53.430 | 99.99th=[39584] 00:39:53.430 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2661.32, stdev=51.19, samples=19 00:39:53.430 iops : min= 640, max= 672, avg=665.32, stdev=12.82, samples=19 00:39:53.430 lat (msec) : 10=0.13%, 20=0.75%, 50=99.12% 00:39:53.430 cpu : usr=98.83%, sys=0.92%, ctx=12, majf=0, minf=39 00:39:53.430 IO depths : 1=4.7%, 2=10.9%, 4=24.9%, 8=51.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:39:53.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.430 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.430 filename1: (groupid=0, jobs=1): err= 0: pid=1332688: Tue Nov 19 11:07:30 2024 00:39:53.430 read: IOPS=674, BW=2699KiB/s (2764kB/s)(26.4MiB/10005msec) 00:39:53.430 slat (usec): min=5, max=110, avg= 8.83, stdev= 5.32 00:39:53.430 clat (usec): min=8995, max=37545, avg=23633.99, stdev=2333.57 00:39:53.430 lat (usec): min=9016, max=37554, avg=23642.82, stdev=2332.36 00:39:53.430 clat percentiles (usec): 00:39:53.431 | 1.00th=[11207], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:39:53.431 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.431 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.431 | 99.00th=[26870], 99.50th=[35914], 99.90th=[36963], 99.95th=[37487], 00:39:53.431 | 99.99th=[37487] 00:39:53.431 bw ( KiB/s): min= 2560, max= 2944, per=4.18%, avg=2694.40, stdev=76.19, samples=20 00:39:53.431 iops : min= 640, max= 736, avg=673.60, stdev=19.05, samples=20 00:39:53.431 lat (msec) : 10=0.64%, 20=2.99%, 50=96.37% 00:39:53.431 cpu : usr=98.81%, sys=0.85%, ctx=63, majf=0, minf=67 00:39:53.431 IO depths : 1=5.1%, 2=11.4%, 4=24.9%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:39:53.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.431 filename2: (groupid=0, jobs=1): err= 0: pid=1332689: Tue Nov 19 11:07:30 2024 00:39:53.431 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10007msec) 00:39:53.431 slat (nsec): min=5654, max=98592, 
avg=23450.67, stdev=14549.06 00:39:53.431 clat (usec): min=10711, max=32499, avg=23746.81, stdev=1319.69 00:39:53.431 lat (usec): min=10718, max=32533, avg=23770.26, stdev=1320.16 00:39:53.431 clat percentiles (usec): 00:39:53.431 | 1.00th=[16909], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:53.431 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:53.431 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:39:53.431 | 99.00th=[28705], 99.50th=[30016], 99.90th=[31327], 99.95th=[31851], 00:39:53.431 | 99.99th=[32375] 00:39:53.431 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2661.05, stdev=53.61, samples=19 00:39:53.431 iops : min= 640, max= 672, avg=665.26, stdev=13.40, samples=19 00:39:53.431 lat (msec) : 20=1.62%, 50=98.38% 00:39:53.431 cpu : usr=98.86%, sys=0.88%, ctx=17, majf=0, minf=24 00:39:53.431 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:39:53.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 issued rwts: total=6686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.431 filename2: (groupid=0, jobs=1): err= 0: pid=1332690: Tue Nov 19 11:07:30 2024 00:39:53.431 read: IOPS=679, BW=2716KiB/s (2781kB/s)(26.5MiB/10005msec) 00:39:53.431 slat (usec): min=5, max=120, avg=19.99, stdev=16.67 00:39:53.431 clat (usec): min=7704, max=40011, avg=23394.58, stdev=3144.55 00:39:53.431 lat (usec): min=7718, max=40017, avg=23414.57, stdev=3145.22 00:39:53.431 clat percentiles (usec): 00:39:53.431 | 1.00th=[12518], 5.00th=[16909], 10.00th=[20317], 20.00th=[23200], 00:39:53.431 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:53.431 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25822], 00:39:53.431 | 99.00th=[33424], 99.50th=[38011], 99.90th=[39060], 99.95th=[40109], 00:39:53.431 | 99.99th=[40109] 00:39:53.431 bw ( KiB/s): min= 2528, max= 3216, per=4.21%, avg=2715.20, stdev=140.29, samples=20 00:39:53.431 iops : min= 632, max= 804, avg=678.80, stdev=35.07, samples=20 00:39:53.431 lat (msec) : 10=0.41%, 20=8.74%, 50=90.84% 00:39:53.431 cpu : usr=98.49%, sys=1.06%, ctx=150, majf=0, minf=29 00:39:53.431 IO depths : 1=4.6%, 2=9.4%, 4=21.1%, 8=56.7%, 16=8.3%, 32=0.0%, >=64=0.0% 00:39:53.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 issued rwts: total=6794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.431 filename2: (groupid=0, jobs=1): err= 0: pid=1332691: Tue Nov 19 11:07:30 2024 00:39:53.431 read: IOPS=667, BW=2671KiB/s (2735kB/s)(26.1MiB/10004msec) 00:39:53.431 slat (nsec): min=5555, max=91898, avg=22052.96, stdev=15979.61 00:39:53.431 clat (usec): min=6241, max=40575, avg=23770.35, stdev=2604.31 00:39:53.431 lat (usec): min=6246, max=40588, avg=23792.41, stdev=2604.41 00:39:53.431 clat percentiles (usec): 00:39:53.431 | 1.00th=[15533], 5.00th=[19792], 10.00th=[22414], 20.00th=[23462], 00:39:53.431 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:53.431 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[27919], 00:39:53.431 | 99.00th=[32900], 99.50th=[35914], 99.90th=[40109], 99.95th=[40633], 00:39:53.431 | 99.99th=[40633] 00:39:53.431 bw ( KiB/s): 
min= 2480, max= 2800, per=4.14%, avg=2665.26, stdev=72.99, samples=19 00:39:53.431 iops : min= 620, max= 700, avg=666.32, stdev=18.25, samples=19 00:39:53.431 lat (msec) : 10=0.10%, 20=5.82%, 50=94.07% 00:39:53.431 cpu : usr=98.78%, sys=0.87%, ctx=73, majf=0, minf=41 00:39:53.431 IO depths : 1=3.5%, 2=7.1%, 4=15.6%, 8=63.2%, 16=10.6%, 32=0.0%, >=64=0.0% 00:39:53.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 complete : 0=0.0%, 4=91.9%, 8=3.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 issued rwts: total=6681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.431 filename2: (groupid=0, jobs=1): err= 0: pid=1332692: Tue Nov 19 11:07:30 2024 00:39:53.431 read: IOPS=666, BW=2668KiB/s (2732kB/s)(26.1MiB/10003msec) 00:39:53.431 slat (nsec): min=5574, max=91947, avg=15552.29, stdev=13883.70 00:39:53.431 clat (usec): min=7536, max=43025, avg=23918.50, stdev=3412.49 00:39:53.431 lat (usec): min=7542, max=43032, avg=23934.05, stdev=3413.21 00:39:53.431 clat percentiles (usec): 00:39:53.431 | 1.00th=[14615], 5.00th=[17957], 10.00th=[20317], 20.00th=[23200], 00:39:53.431 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.431 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27395], 95.00th=[30278], 00:39:53.431 | 99.00th=[35390], 99.50th=[39060], 99.90th=[41681], 99.95th=[41681], 00:39:53.431 | 99.99th=[43254] 00:39:53.431 bw ( KiB/s): min= 2420, max= 2848, per=4.13%, avg=2659.58, stdev=94.44, samples=19 00:39:53.431 iops : min= 605, max= 712, avg=664.89, stdev=23.61, samples=19 00:39:53.431 lat (msec) : 10=0.09%, 20=9.17%, 50=90.74% 00:39:53.431 cpu : usr=98.77%, sys=0.87%, ctx=79, majf=0, minf=26 00:39:53.431 IO depths : 1=0.2%, 2=0.4%, 4=3.2%, 8=79.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:39:53.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 complete : 0=0.0%, 4=89.3%, 8=8.7%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.431 filename2: (groupid=0, jobs=1): err= 0: pid=1332693: Tue Nov 19 11:07:30 2024 00:39:53.431 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10006msec) 00:39:53.431 slat (usec): min=5, max=114, avg=13.61, stdev= 9.48 00:39:53.431 clat (usec): min=9546, max=26979, avg=23708.54, stdev=1527.54 00:39:53.431 lat (usec): min=9563, max=26989, avg=23722.15, stdev=1525.44 00:39:53.431 clat percentiles (usec): 00:39:53.431 | 1.00th=[12780], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:53.431 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:39:53.431 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:39:53.431 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26870], 99.95th=[26870], 00:39:53.431 | 99.99th=[26870] 00:39:53.431 bw ( KiB/s): min= 2560, max= 2944, per=4.16%, avg=2681.60, stdev=88.00, samples=20 00:39:53.431 iops : min= 640, max= 736, avg=670.40, stdev=22.00, samples=20 00:39:53.431 lat (msec) : 10=0.43%, 20=1.00%, 50=98.57% 00:39:53.431 cpu : usr=98.23%, sys=1.19%, ctx=139, majf=0, minf=31 00:39:53.431 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:53.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 issued rwts: total=6720,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:39:53.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.431 filename2: (groupid=0, jobs=1): err= 0: pid=1332694: Tue Nov 19 11:07:30 2024 00:39:53.431 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10010msec) 00:39:53.431 slat (nsec): min=5635, max=96672, avg=22281.62, stdev=16896.01 00:39:53.431 clat (usec): min=10545, max=37256, avg=23757.01, stdev=1047.73 00:39:53.431 lat (usec): min=10555, max=37267, avg=23779.29, stdev=1045.84 00:39:53.431 clat percentiles (usec): 00:39:53.431 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:53.431 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:53.431 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.431 | 99.00th=[25822], 99.50th=[26608], 99.90th=[30540], 99.95th=[31065], 00:39:53.431 | 99.99th=[37487] 00:39:53.431 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2667.79, stdev=47.95, samples=19 00:39:53.431 iops : min= 640, max= 672, avg=666.95, stdev=11.99, samples=19 00:39:53.431 lat (msec) : 20=1.15%, 50=98.85% 00:39:53.431 cpu : usr=99.05%, sys=0.69%, ctx=14, majf=0, minf=32 00:39:53.431 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:53.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.431 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.431 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.431 filename2: (groupid=0, jobs=1): err= 0: pid=1332695: Tue Nov 19 11:07:30 2024 00:39:53.431 read: IOPS=667, BW=2670KiB/s (2734kB/s)(26.1MiB/10009msec) 00:39:53.431 slat (usec): min=4, max=104, avg=24.11, stdev=15.88 00:39:53.431 clat (usec): min=7078, max=45890, avg=23746.21, stdev=2319.59 00:39:53.431 lat (usec): min=7085, max=45903, avg=23770.32, stdev=2319.82 00:39:53.431 clat percentiles (usec): 00:39:53.431 | 1.00th=[15533], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:39:53.431 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:53.431 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:39:53.431 | 99.00th=[32900], 99.50th=[35390], 99.90th=[45876], 99.95th=[45876], 00:39:53.431 | 99.99th=[45876] 00:39:53.431 bw ( KiB/s): min= 2432, max= 3008, per=4.13%, avg=2658.53, stdev=111.65, samples=19 00:39:53.431 iops : min= 608, max= 752, avg=664.63, stdev=27.91, samples=19 00:39:53.431 lat (msec) : 10=0.06%, 20=3.11%, 50=96.83% 00:39:53.431 cpu : usr=98.91%, sys=0.83%, ctx=29, majf=0, minf=30 00:39:53.432 IO depths : 1=5.6%, 2=11.4%, 4=23.7%, 8=52.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:39:53.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.432 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.432 issued rwts: total=6682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.432 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.432 filename2: (groupid=0, jobs=1): err= 0: pid=1332696: Tue Nov 19 11:07:30 2024 00:39:53.432 read: IOPS=671, BW=2684KiB/s (2749kB/s)(26.2MiB/10001msec) 00:39:53.432 slat (usec): min=5, max=100, avg=20.83, stdev=17.28 00:39:53.432 clat (usec): min=5540, max=32572, avg=23676.00, stdev=1569.54 00:39:53.432 lat (usec): min=5548, max=32614, avg=23696.84, stdev=1569.03 00:39:53.432 clat percentiles (usec): 00:39:53.432 | 1.00th=[15401], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:53.432 | 
30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:53.432 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:39:53.432 | 99.00th=[25297], 99.50th=[25822], 99.90th=[30540], 99.95th=[31589], 00:39:53.432 | 99.99th=[32637] 00:39:53.432 bw ( KiB/s): min= 2560, max= 3000, per=4.17%, avg=2684.21, stdev=90.13, samples=19 00:39:53.432 iops : min= 640, max= 750, avg=671.05, stdev=22.53, samples=19 00:39:53.432 lat (msec) : 10=0.49%, 20=0.98%, 50=98.52% 00:39:53.432 cpu : usr=98.65%, sys=0.90%, ctx=151, majf=0, minf=32 00:39:53.432 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:53.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.432 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.432 issued rwts: total=6711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.432 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:53.432 00:39:53.432 Run status group 0 (all jobs): 00:39:53.432 READ: bw=62.9MiB/s (66.0MB/s), 2668KiB/s-2722KiB/s (2732kB/s-2787kB/s), io=630MiB (661MB), run=10001-10014msec 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 bdev_null0 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 [2024-11-19 11:07:31.168911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 bdev_null1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:53.432 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:53.432 { 00:39:53.433 "params": { 00:39:53.433 "name": "Nvme$subsystem", 00:39:53.433 "trtype": "$TEST_TRANSPORT", 00:39:53.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:53.433 "adrfam": "ipv4", 00:39:53.433 "trsvcid": "$NVMF_PORT", 00:39:53.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:53.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:53.433 "hdgst": ${hdgst:-false}, 00:39:53.433 "ddgst": ${ddgst:-false} 00:39:53.433 }, 00:39:53.433 "method": "bdev_nvme_attach_controller" 00:39:53.433 } 00:39:53.433 EOF 00:39:53.433 )") 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:53.433 { 00:39:53.433 "params": { 00:39:53.433 "name": "Nvme$subsystem", 00:39:53.433 "trtype": "$TEST_TRANSPORT", 00:39:53.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:53.433 "adrfam": "ipv4", 00:39:53.433 "trsvcid": "$NVMF_PORT", 00:39:53.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:53.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:53.433 "hdgst": ${hdgst:-false}, 00:39:53.433 "ddgst": ${ddgst:-false} 00:39:53.433 }, 00:39:53.433 "method": "bdev_nvme_attach_controller" 00:39:53.433 } 00:39:53.433 EOF 00:39:53.433 )") 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:53.433 11:07:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:53.433 "params": { 00:39:53.433 "name": "Nvme0", 00:39:53.433 "trtype": "tcp", 00:39:53.433 "traddr": "10.0.0.2", 00:39:53.433 "adrfam": "ipv4", 00:39:53.433 "trsvcid": "4420", 00:39:53.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:53.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:53.433 "hdgst": false, 00:39:53.433 "ddgst": false 00:39:53.433 }, 00:39:53.433 "method": "bdev_nvme_attach_controller" 00:39:53.433 },{ 00:39:53.433 "params": { 00:39:53.433 "name": "Nvme1", 00:39:53.433 "trtype": "tcp", 00:39:53.433 "traddr": "10.0.0.2", 00:39:53.433 "adrfam": "ipv4", 00:39:53.433 "trsvcid": "4420", 00:39:53.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:53.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:53.433 "hdgst": false, 00:39:53.433 "ddgst": false 00:39:53.433 }, 00:39:53.433 "method": "bdev_nvme_attach_controller" 00:39:53.433 }' 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:53.433 11:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:53.433 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:53.433 ... 00:39:53.433 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:53.433 ... 
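The two descriptors handed to fio in the trace above carry the generated artifacts: /dev/fd/62 is the SPDK JSON configuration assembled by gen_nvmf_target_json, /dev/fd/61 the job file from gen_fio_conf. A standalone reconstruction of that invocation might look like the sketch below; the subsystems/config envelope and the job-file body are assumptions (the trace shows only the attach-controller params and the job summary lines), with every visible value copied from the trace.
# Sketch only -- not the harness's exact files; envelope and job file inferred.
cat > bdev.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } }
] } ] }
JSON
cat > rand_params.fio <<'FIO'
[filename0]
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
thread=1
filename=Nvme0n1
FIO
# LD_PRELOAD injects the SPDK bdev engine so --ioengine=spdk_bdev resolves;
# Nvme0n1 is the bdev SPDK creates for namespace 1 of the attached Nvme0.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json rand_params.fio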
00:39:53.433 fio-3.35 00:39:53.433 Starting 4 threads 00:39:58.715 00:39:58.716 filename0: (groupid=0, jobs=1): err= 0: pid=1334998: Tue Nov 19 11:07:37 2024 00:39:58.716 read: IOPS=2973, BW=23.2MiB/s (24.4MB/s)(116MiB/5002msec) 00:39:58.716 slat (nsec): min=5395, max=68320, avg=6075.06, stdev=2049.85 00:39:58.716 clat (usec): min=1079, max=4626, avg=2674.70, stdev=222.06 00:39:58.716 lat (usec): min=1096, max=4631, avg=2680.78, stdev=222.01 00:39:58.716 clat percentiles (usec): 00:39:58.716 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2540], 00:39:58.716 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:39:58.716 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2933], 00:39:58.716 | 99.00th=[ 3621], 99.50th=[ 3851], 99.90th=[ 4178], 99.95th=[ 4293], 00:39:58.716 | 99.99th=[ 4621] 00:39:58.716 bw ( KiB/s): min=23488, max=24144, per=25.06%, avg=23827.56, stdev=219.06, samples=9 00:39:58.716 iops : min= 2936, max= 3018, avg=2978.44, stdev=27.38, samples=9 00:39:58.716 lat (msec) : 2=0.39%, 4=99.34%, 10=0.27% 00:39:58.716 cpu : usr=96.26%, sys=3.50%, ctx=6, majf=0, minf=86 00:39:58.716 IO depths : 1=0.1%, 2=0.1%, 4=70.7%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.716 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.716 issued rwts: total=14872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.716 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:58.716 filename0: (groupid=0, jobs=1): err= 0: pid=1334999: Tue Nov 19 11:07:37 2024 00:39:58.716 read: IOPS=2879, BW=22.5MiB/s (23.6MB/s)(113MiB/5001msec) 00:39:58.716 slat (nsec): min=5405, max=61571, avg=6107.05, stdev=2120.38 00:39:58.716 clat (usec): min=1390, max=4940, avg=2762.53, stdev=374.37 00:39:58.716 lat (usec): min=1396, max=4946, avg=2768.63, stdev=374.35 00:39:58.716 clat percentiles (usec): 00:39:58.716 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2606], 00:39:58.716 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:39:58.716 | 70.00th=[ 2704], 80.00th=[ 2868], 90.00th=[ 3097], 95.00th=[ 3818], 00:39:58.716 | 99.00th=[ 4113], 99.50th=[ 4228], 99.90th=[ 4555], 99.95th=[ 4686], 00:39:58.716 | 99.99th=[ 4948] 00:39:58.716 bw ( KiB/s): min=22256, max=23760, per=24.16%, avg=22972.22, stdev=518.23, samples=9 00:39:58.716 iops : min= 2782, max= 2970, avg=2871.44, stdev=64.72, samples=9 00:39:58.716 lat (msec) : 2=0.53%, 4=97.01%, 10=2.46% 00:39:58.716 cpu : usr=96.32%, sys=3.46%, ctx=8, majf=0, minf=66 00:39:58.716 IO depths : 1=0.1%, 2=0.3%, 4=69.3%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.716 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.716 issued rwts: total=14402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.716 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:58.716 filename1: (groupid=0, jobs=1): err= 0: pid=1335000: Tue Nov 19 11:07:37 2024 00:39:58.716 read: IOPS=2959, BW=23.1MiB/s (24.2MB/s)(116MiB/5002msec) 00:39:58.716 slat (nsec): min=5396, max=69629, avg=5987.00, stdev=1928.57 00:39:58.716 clat (usec): min=1587, max=4758, avg=2686.57, stdev=229.18 00:39:58.716 lat (usec): min=1592, max=4783, avg=2692.55, stdev=229.27 00:39:58.716 clat percentiles (usec): 00:39:58.716 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2540], 00:39:58.716 | 30.00th=[ 2638], 
40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:39:58.716 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2966], 00:39:58.716 | 99.00th=[ 3851], 99.50th=[ 4015], 99.90th=[ 4359], 99.95th=[ 4490], 00:39:58.716 | 99.99th=[ 4686] 00:39:58.716 bw ( KiB/s): min=23248, max=24032, per=24.94%, avg=23706.67, stdev=275.39, samples=9 00:39:58.716 iops : min= 2906, max= 3004, avg=2963.33, stdev=34.42, samples=9 00:39:58.716 lat (msec) : 2=0.18%, 4=99.28%, 10=0.54% 00:39:58.716 cpu : usr=96.34%, sys=3.44%, ctx=6, majf=0, minf=105 00:39:58.716 IO depths : 1=0.1%, 2=0.1%, 4=73.8%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.716 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.716 issued rwts: total=14801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.716 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:58.716 filename1: (groupid=0, jobs=1): err= 0: pid=1335001: Tue Nov 19 11:07:37 2024 00:39:58.716 read: IOPS=3072, BW=24.0MiB/s (25.2MB/s)(120MiB/5002msec) 00:39:58.716 slat (nsec): min=5396, max=75051, avg=5755.28, stdev=1367.65 00:39:58.716 clat (usec): min=1357, max=4415, avg=2589.81, stdev=316.94 00:39:58.716 lat (usec): min=1363, max=4420, avg=2595.56, stdev=316.99 00:39:58.716 clat percentiles (usec): 00:39:58.716 | 1.00th=[ 1876], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2376], 00:39:58.716 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2671], 00:39:58.716 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2835], 95.00th=[ 3228], 00:39:58.716 | 99.00th=[ 3621], 99.50th=[ 3851], 99.90th=[ 4228], 99.95th=[ 4359], 00:39:58.716 | 99.99th=[ 4424] 00:39:58.716 bw ( KiB/s): min=24176, max=24976, per=25.85%, avg=24572.44, stdev=262.03, samples=9 00:39:58.716 iops : min= 3022, max= 3122, avg=3071.56, stdev=32.75, samples=9 00:39:58.716 lat (msec) : 2=2.93%, 4=96.78%, 10=0.29% 00:39:58.716 cpu : usr=97.34%, sys=2.42%, ctx=7, majf=0, minf=81 00:39:58.716 IO depths : 1=0.1%, 2=0.1%, 4=68.1%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.716 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.716 issued rwts: total=15367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.716 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:58.716 00:39:58.716 Run status group 0 (all jobs): 00:39:58.716 READ: bw=92.8MiB/s (97.3MB/s), 22.5MiB/s-24.0MiB/s (23.6MB/s-25.2MB/s), io=464MiB (487MB), run=5001-5002msec 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.716 00:39:58.716 real 0m24.529s 00:39:58.716 user 5m12.171s 00:39:58.716 sys 0m4.807s 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:58.716 11:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:58.716 ************************************ 00:39:58.716 END TEST fio_dif_rand_params 00:39:58.716 ************************************ 00:39:58.716 11:07:37 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:58.716 11:07:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:58.716 11:07:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:58.716 11:07:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:58.716 ************************************ 00:39:58.716 START TEST fio_dif_digest 00:39:58.716 ************************************ 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
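The create_subsystems call that starts here repeats the four-RPC pattern traced below, now asking bdev_null_create for --md-size 16 --dif-type 3 so each 512-byte block carries 16 bytes of metadata with type-3 protection information. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; issued directly, the equivalent sequence (a sketch, assuming a running nvmf target with its TCP transport already created) would be roughly:
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# 64 MiB null bdev, 512-byte blocks plus 16-byte metadata, DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420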
00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:58.716 bdev_null0 00:39:58.716 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:58.717 [2024-11-19 11:07:37.752128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:58.717 { 00:39:58.717 "params": { 00:39:58.717 "name": "Nvme$subsystem", 00:39:58.717 "trtype": "$TEST_TRANSPORT", 00:39:58.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:58.717 "adrfam": "ipv4", 00:39:58.717 "trsvcid": "$NVMF_PORT", 00:39:58.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:58.717 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:39:58.717 "hdgst": ${hdgst:-false}, 00:39:58.717 "ddgst": ${ddgst:-false} 00:39:58.717 }, 00:39:58.717 "method": "bdev_nvme_attach_controller" 00:39:58.717 } 00:39:58.717 EOF 00:39:58.717 )") 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:58.717 "params": { 00:39:58.717 "name": "Nvme0", 00:39:58.717 "trtype": "tcp", 00:39:58.717 "traddr": "10.0.0.2", 00:39:58.717 "adrfam": "ipv4", 00:39:58.717 "trsvcid": "4420", 00:39:58.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:58.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:58.717 "hdgst": true, 00:39:58.717 "ddgst": true 00:39:58.717 }, 00:39:58.717 "method": "bdev_nvme_attach_controller" 00:39:58.717 }' 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:58.717 11:07:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:59.285 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:59.285 ... 
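Note: the banner line above fixes most of the generated job file's parameters (ioengine=spdk_bdev, rw=randread, bs=128KiB, iodepth=3), and "Starting 3 threads" below implies three jobs. A reconstruction under those assumptions — the Nvme0n1 filename follows SPDK's controller-name plus namespace bdev naming for the Nvme0 controller attached via the JSON config, and the runtime is inferred from run=10005-10045msec in the results that follow:

# Sketch only of the job file the harness's gen_fio_conf would have produced for this run.
cat > /tmp/digest.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
numjobs=3
EOF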
00:39:59.285 fio-3.35
00:39:59.285 Starting 3 threads
00:40:11.507
00:40:11.507 filename0: (groupid=0, jobs=1): err= 0: pid=1336418: Tue Nov 19 11:07:48 2024
00:40:11.507 read: IOPS=350, BW=43.8MiB/s (46.0MB/s)(440MiB/10045msec)
00:40:11.507 slat (nsec): min=5778, max=61337, avg=9026.47, stdev=2089.33
00:40:11.507 clat (usec): min=5101, max=49160, avg=8530.82, stdev=1602.88
00:40:11.507 lat (usec): min=5111, max=49169, avg=8539.84, stdev=1602.76
00:40:11.507 clat percentiles (usec):
00:40:11.507 | 1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 6915], 20.00th=[ 7308],
00:40:11.507 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8356], 60.00th=[ 8979],
00:40:11.507 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10552],
00:40:11.507 | 99.00th=[11207], 99.50th=[11469], 99.90th=[11731], 99.95th=[46400],
00:40:11.507 | 99.99th=[49021]
00:40:11.507 bw ( KiB/s): min=41216, max=48384, per=41.38%, avg=45068.80, stdev=2011.42, samples=20
00:40:11.507 iops : min= 322, max= 378, avg=352.10, stdev=15.71, samples=20
00:40:11.507 lat (msec) : 10=84.16%, 20=15.78%, 50=0.06%
00:40:11.507 cpu : usr=94.18%, sys=5.57%, ctx=35, majf=0, minf=62
00:40:11.507 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:11.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.507 issued rwts: total=3523,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:11.507 latency : target=0, window=0, percentile=100.00%, depth=3
00:40:11.507 filename0: (groupid=0, jobs=1): err= 0: pid=1336419: Tue Nov 19 11:07:48 2024
00:40:11.507 read: IOPS=163, BW=20.4MiB/s (21.4MB/s)(204MiB/10010msec)
00:40:11.507 slat (nsec): min=5772, max=31257, avg=6980.21, stdev=1640.07
00:40:11.507 clat (msec): min=7, max=132, avg=18.38, stdev=18.43
00:40:11.507 lat (msec): min=7, max=132, avg=18.39, stdev=18.43
00:40:11.507 clat percentiles (msec):
00:40:11.507 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10],
00:40:11.507 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11],
00:40:11.507 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 52], 95.00th=[ 53],
00:40:11.507 | 99.00th=[ 92], 99.50th=[ 92], 99.90th=[ 132], 99.95th=[ 133],
00:40:11.507 | 99.99th=[ 133]
00:40:11.507 bw ( KiB/s): min=13056, max=29184, per=19.19%, avg=20897.68, stdev=4454.70, samples=19
00:40:11.507 iops : min= 102, max= 228, avg=163.26, stdev=34.80, samples=19
00:40:11.507 lat (msec) : 10=30.07%, 20=52.24%, 50=3.00%, 100=14.57%, 250=0.12%
00:40:11.507 cpu : usr=94.85%, sys=4.44%, ctx=914, majf=0, minf=145
00:40:11.507 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:11.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.507 issued rwts: total=1633,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:11.507 latency : target=0, window=0, percentile=100.00%, depth=3
00:40:11.507 filename0: (groupid=0, jobs=1): err= 0: pid=1336420: Tue Nov 19 11:07:48 2024
00:40:11.507 read: IOPS=338, BW=42.4MiB/s (44.4MB/s)(424MiB/10005msec)
00:40:11.507 slat (nsec): min=5790, max=31784, avg=6753.44, stdev=990.76
00:40:11.507 clat (usec): min=5024, max=51964, avg=8842.57, stdev=1811.37
00:40:11.507 lat (usec): min=5030, max=51996, avg=8849.32, stdev=1811.62
00:40:11.507 clat percentiles (usec):
00:40:11.507 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7504],
00:40:11.507 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241],
00:40:11.507 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814],
00:40:11.507 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12649], 99.95th=[50594],
00:40:11.507 | 99.99th=[52167]
00:40:11.507 bw ( KiB/s): min=38912, max=46080, per=39.85%, avg=43398.74, stdev=1788.68, samples=19
00:40:11.507 iops : min= 304, max= 360, avg=339.05, stdev=13.97, samples=19
00:40:11.507 lat (msec) : 10=77.44%, 20=22.47%, 100=0.09%
00:40:11.507 cpu : usr=94.63%, sys=5.14%, ctx=16, majf=0, minf=192
00:40:11.507 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:11.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:11.507 issued rwts: total=3391,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:11.507 latency : target=0, window=0, percentile=100.00%, depth=3
00:40:11.507
00:40:11.507 Run status group 0 (all jobs):
00:40:11.507 READ: bw=106MiB/s (112MB/s), 20.4MiB/s-43.8MiB/s (21.4MB/s-46.0MB/s), io=1068MiB (1120MB), run=10005-10045msec
00:40:11.507 11:07:48 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:40:11.507 11:07:48 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:40:11.507 11:07:48 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:40:11.507 11:07:48 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:40:11.507 11:07:48 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:40:11.507 11:07:48 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:40:11.507 11:07:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:11.507 11:07:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:40:11.508 11:07:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:11.508 11:07:48 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:40:11.508 11:07:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:11.508 11:07:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:40:11.508 11:07:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:11.508
00:40:11.508 real 0m11.078s
00:40:11.508 user 0m42.446s
00:40:11.508 sys 0m1.824s
00:40:11.508 11:07:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:40:11.508 11:07:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:40:11.508 ************************************
00:40:11.508 END TEST fio_dif_digest
00:40:11.508 ************************************
00:40:11.508 11:07:48 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:40:11.508 11:07:48 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup
00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@121 -- # sync
00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@124 -- # set +e
00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20}
00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:40:11.508 rmmod nvme_tcp
00:40:11.508 rmmod nvme_fabrics
00:40:11.508 rmmod nvme_keyring
00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
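Note: the run-status line above is internally consistent with the three per-job results. Two quick checks using only numbers from this log:

# Group bandwidth is roughly total io over the longest runtime: io=1068MiB, run up to 10045msec.
awk 'BEGIN { printf "%.1f MiB/s\n", 1068 / 10.045 }'        # -> 106.3, logged as bw=106MiB/s
# The per-job averages sum to the same aggregate: 43.8 + 20.4 + 42.4 MiB/s.
awk 'BEGIN { printf "%.1f MiB/s\n", 43.8 + 20.4 + 42.4 }'   # -> 106.6, matching within rounding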
00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1325948 ']' 00:40:11.508 11:07:48 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1325948 00:40:11.508 11:07:48 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1325948 ']' 00:40:11.508 11:07:48 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1325948 00:40:11.508 11:07:48 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:40:11.508 11:07:48 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:11.508 11:07:48 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325948 00:40:11.508 11:07:48 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:11.508 11:07:48 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:11.508 11:07:48 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325948' 00:40:11.508 killing process with pid 1325948 00:40:11.508 11:07:48 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1325948 00:40:11.508 11:07:48 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1325948 00:40:11.508 11:07:49 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:40:11.508 11:07:49 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:13.418 Waiting for block devices as requested 00:40:13.418 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:13.418 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:13.418 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:13.418 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:13.680 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:13.680 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:13.680 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:13.968 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:13.968 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:14.228 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:14.228 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:14.228 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:14.228 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:14.488 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:14.488 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:14.488 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:14.747 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:15.008 11:07:54 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:15.008 11:07:54 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:15.008 11:07:54 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:40:15.008 11:07:54 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:40:15.008 11:07:54 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:15.008 11:07:54 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:40:15.008 11:07:54 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:15.008 11:07:54 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:15.008 11:07:54 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.008 11:07:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:15.008 11:07:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.549 11:07:56 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:17.549 00:40:17.549 real 1m18.344s 00:40:17.549 user 8m2.837s 
00:40:17.549 sys 0m22.020s 00:40:17.549 11:07:56 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:17.549 11:07:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:17.549 ************************************ 00:40:17.549 END TEST nvmf_dif 00:40:17.549 ************************************ 00:40:17.549 11:07:56 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:17.549 11:07:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:17.549 11:07:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:17.549 11:07:56 -- common/autotest_common.sh@10 -- # set +x 00:40:17.549 ************************************ 00:40:17.549 START TEST nvmf_abort_qd_sizes 00:40:17.549 ************************************ 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:17.550 * Looking for test storage... 00:40:17.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:17.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.550 --rc genhtml_branch_coverage=1 00:40:17.550 --rc genhtml_function_coverage=1 00:40:17.550 --rc genhtml_legend=1 00:40:17.550 --rc geninfo_all_blocks=1 00:40:17.550 --rc geninfo_unexecuted_blocks=1 00:40:17.550 00:40:17.550 ' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:17.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.550 --rc genhtml_branch_coverage=1 00:40:17.550 --rc genhtml_function_coverage=1 00:40:17.550 --rc genhtml_legend=1 00:40:17.550 --rc geninfo_all_blocks=1 00:40:17.550 --rc geninfo_unexecuted_blocks=1 00:40:17.550 00:40:17.550 ' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:17.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.550 --rc genhtml_branch_coverage=1 00:40:17.550 --rc genhtml_function_coverage=1 00:40:17.550 --rc genhtml_legend=1 00:40:17.550 --rc geninfo_all_blocks=1 00:40:17.550 --rc geninfo_unexecuted_blocks=1 00:40:17.550 00:40:17.550 ' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:17.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.550 --rc genhtml_branch_coverage=1 00:40:17.550 --rc genhtml_function_coverage=1 00:40:17.550 --rc genhtml_legend=1 00:40:17.550 --rc geninfo_all_blocks=1 00:40:17.550 --rc geninfo_unexecuted_blocks=1 00:40:17.550 00:40:17.550 ' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:17.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:17.550 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:17.551 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:17.551 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.551 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:17.551 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.551 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:17.551 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:17.551 11:07:56 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:40:17.551 11:07:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:25.685 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:25.685 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:40:25.685 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:25.686 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:25.686 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:25.686 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:25.686 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:25.686 11:08:03 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:25.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:25.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:40:25.686 00:40:25.686 --- 10.0.0.2 ping statistics --- 00:40:25.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:25.686 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:40:25.686 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:25.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:25.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:40:25.686 00:40:25.686 --- 10.0.0.1 ping statistics --- 00:40:25.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:25.687 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:40:25.687 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:25.687 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:40:25.687 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:25.687 11:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:28.233 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:28.233 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1345848 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1345848 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1345848 ']' 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:28.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:28.804 11:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:28.804 [2024-11-19 11:08:07.892695] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:40:28.804 [2024-11-19 11:08:07.892743] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:28.804 [2024-11-19 11:08:07.985362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:29.065 [2024-11-19 11:08:08.025336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:29.065 [2024-11-19 11:08:08.025373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:29.065 [2024-11-19 11:08:08.025381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:29.065 [2024-11-19 11:08:08.025388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:29.065 [2024-11-19 11:08:08.025394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:29.065 [2024-11-19 11:08:08.027199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:29.065 [2024-11-19 11:08:08.027287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:29.065 [2024-11-19 11:08:08.027402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:29.065 [2024-11-19 11:08:08.027402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:40:29.635 
11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.635 11:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:29.635 ************************************ 00:40:29.635 START TEST spdk_target_abort 00:40:29.635 ************************************ 00:40:29.635 11:08:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:40:29.635 11:08:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:40:29.635 11:08:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:40:29.635 11:08:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.635 11:08:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:30.205 spdk_targetn1 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:30.205 [2024-11-19 11:08:09.120679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:30.205 [2024-11-19 11:08:09.173012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:30.205 11:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:30.205 [2024-11-19 11:08:09.367590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:496 len:8 PRP1 0x200004abe000 PRP2 0x0 00:40:30.205 [2024-11-19 11:08:09.367635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:40:30.205 [2024-11-19 11:08:09.383706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1040 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:40:30.205 [2024-11-19 11:08:09.383732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0083 p:1 m:0 dnr:0 00:40:30.465 [2024-11-19 11:08:09.422719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2312 len:8 PRP1 0x200004abe000 PRP2 0x0 00:40:30.465 [2024-11-19 11:08:09.422743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:33.755 Initializing NVMe Controllers 00:40:33.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:33.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:33.755 Initialization complete. Launching workers. 00:40:33.755 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11614, failed: 3 00:40:33.755 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2645, failed to submit 8972 00:40:33.755 success 734, unsuccessful 1911, failed 0 00:40:33.755 11:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:33.755 11:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:33.755 [2024-11-19 11:08:12.581992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:304 len:8 PRP1 0x200004e56000 PRP2 0x0 00:40:33.755 [2024-11-19 11:08:12.582035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:40:33.755 [2024-11-19 11:08:12.611302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:1056 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:40:33.755 [2024-11-19 11:08:12.611326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:008c p:1 m:0 dnr:0 00:40:33.755 [2024-11-19 11:08:12.644406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:1816 len:8 PRP1 0x200004e44000 PRP2 0x0 00:40:33.755 [2024-11-19 11:08:12.644430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:00f6 p:1 m:0 dnr:0 00:40:33.755 [2024-11-19 11:08:12.660463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:2240 len:8 PRP1 0x200004e48000 PRP2 0x0 00:40:33.755 [2024-11-19 11:08:12.660485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:40:33.755 [2024-11-19 11:08:12.677303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:2656 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:40:33.755 [2024-11-19 11:08:12.677326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:33.755 [2024-11-19 11:08:12.685283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:2800 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:40:33.755 [2024-11-19 11:08:12.685305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:33.755 [2024-11-19 11:08:12.709289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:3328 len:8 PRP1 0x200004e40000 PRP2 0x0 00:40:33.755 [2024-11-19 11:08:12.709311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:00b2 p:0 m:0 dnr:0 00:40:37.050 Initializing NVMe Controllers 00:40:37.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:37.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:37.050 Initialization complete. Launching workers. 00:40:37.050 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8502, failed: 7 00:40:37.050 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 7279 00:40:37.050 success 327, unsuccessful 903, failed 0 00:40:37.050 11:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:37.050 11:08:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:37.050 [2024-11-19 11:08:15.966327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:172 nsid:1 lba:3056 len:8 PRP1 0x200004afc000 PRP2 0x0 00:40:37.050 [2024-11-19 11:08:15.966357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:172 cdw0:0 sqhd:00e1 p:1 m:0 dnr:0 00:40:40.347 Initializing NVMe Controllers 00:40:40.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:40.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:40.347 Initialization complete. Launching workers. 
00:40:40.347 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43645, failed: 1 00:40:40.347 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2767, failed to submit 40879 00:40:40.347 success 585, unsuccessful 2182, failed 0 00:40:40.347 11:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:40.347 11:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.347 11:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:40.347 11:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.347 11:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:40.347 11:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.347 11:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1345848 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1345848 ']' 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1345848 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1345848 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1345848' 00:40:41.730 killing process with pid 1345848 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1345848 00:40:41.730 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1345848 00:40:41.990 00:40:41.990 real 0m12.185s 00:40:41.990 user 0m49.780s 00:40:41.990 sys 0m1.969s 00:40:41.990 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:41.990 11:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:41.990 ************************************ 00:40:41.990 END TEST spdk_target_abort 00:40:41.990 ************************************ 00:40:41.990 11:08:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:41.990 11:08:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:41.990 11:08:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:41.990 11:08:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:41.990 ************************************ 00:40:41.990 START TEST kernel_target_abort 00:40:41.990 
************************************ 00:40:41.990 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:40:41.990 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:41.991 11:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:45.289 Waiting for block devices as requested 00:40:45.289 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:45.551 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:45.551 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:45.551 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:45.551 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:45.811 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:45.811 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:45.811 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:46.072 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:46.072 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:46.333 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:46.333 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:46.334 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:46.594 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:46.594 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:46.594 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:46.862 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:47.134 No valid GPT data, bailing 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:47.134 11:08:26 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:47.134 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:47.135 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:40:47.135 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:40:47.135 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:40:47.135 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:40:47.135 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:40:47.135 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:40:47.135 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:40:47.135 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:47.135 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:40:47.477 00:40:47.477 Discovery Log Number of Records 2, Generation counter 2 00:40:47.477 =====Discovery Log Entry 0====== 00:40:47.477 trtype: tcp 00:40:47.477 adrfam: ipv4 00:40:47.477 subtype: current discovery subsystem 00:40:47.477 treq: not specified, sq flow control disable supported 00:40:47.477 portid: 1 00:40:47.477 trsvcid: 4420 00:40:47.477 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:47.477 traddr: 10.0.0.1 00:40:47.477 eflags: none 00:40:47.477 sectype: none 00:40:47.477 =====Discovery Log Entry 1====== 00:40:47.477 trtype: tcp 00:40:47.477 adrfam: ipv4 00:40:47.477 subtype: nvme subsystem 00:40:47.477 treq: not specified, sq flow control disable supported 00:40:47.477 portid: 1 00:40:47.477 trsvcid: 4420 00:40:47.477 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:47.477 traddr: 10.0.0.1 00:40:47.477 eflags: none 00:40:47.477 sectype: none 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:47.477 11:08:26 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:47.477 11:08:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:50.855 Initializing NVMe Controllers 00:40:50.855 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:50.855 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:50.855 Initialization complete. Launching workers. 00:40:50.855 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67039, failed: 0 00:40:50.855 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67039, failed to submit 0 00:40:50.855 success 0, unsuccessful 67039, failed 0 00:40:50.855 11:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:50.855 11:08:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:54.157 Initializing NVMe Controllers 00:40:54.157 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:54.157 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:54.157 Initialization complete. Launching workers. 
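kernel_target_abort repeats the same abort loop, but against a target built from the Linux kernel's nvmet driver instead of an SPDK process: setup.sh reset hands the NVMe device back to the kernel driver, the first non-zoned, unused namespace (/dev/nvme0n1 here, after the zoned and GPT-in-use checks) is exported through configfs, and a TCP port is bound to 10.0.0.1:4420. The xtrace above records the echoed values but not their redirect targets; matched against the stock kernel nvmet configfs attribute names they line up with, configure_kernel_target is approximately:

    # hedged reconstruction of configure_kernel_target from the trace above;
    # the attribute file names are the standard nvmet configfs ones, inferred
    # because xtrace does not show redirections
    nvmet=/sys/kernel/config/nvmet
    nqn=nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir "$nvmet/subsystems/$nqn" "$nvmet/subsystems/$nqn/namespaces/1" "$nvmet/ports/1"
    echo "SPDK-$nqn"  > "$nvmet/subsystems/$nqn/attr_serial"
    echo 1            > "$nvmet/subsystems/$nqn/attr_allow_any_host"
    echo /dev/nvme0n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
    echo 1            > "$nvmet/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420   # two records, as in the discovery log above

The discovery output earlier in the trace (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn) confirms the kernel target is listening before the abort runs start. Note that every abort against the kernel target comes back unsuccessful (success 0): the kernel target evidently completes or refuses the targeted I/O rather than cancelling it, and the test only exercises that abort submission at each queue depth does not wedge the initiator.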
00:40:54.157 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 118993, failed: 0 00:40:54.157 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29938, failed to submit 89055 00:40:54.157 success 0, unsuccessful 29938, failed 0 00:40:54.157 11:08:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:54.157 11:08:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:56.700 Initializing NVMe Controllers 00:40:56.700 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:56.700 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:56.700 Initialization complete. Launching workers. 00:40:56.700 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146026, failed: 0 00:40:56.700 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36542, failed to submit 109484 00:40:56.700 success 0, unsuccessful 36542, failed 0 00:40:56.700 11:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:56.700 11:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:56.700 11:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:40:56.700 11:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:56.700 11:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:56.700 11:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:56.700 11:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:56.700 11:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:40:56.700 11:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:40:56.700 11:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:59.998 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:59.998 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:59.998 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:59.998 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:59.998 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:59.998 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:00.259 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:00.259 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:00.259 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:00.259 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:00.259 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:00.259 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:00.259 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:00.259 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:00.259 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:41:00.259 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:02.173 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:02.434 00:41:02.434 real 0m20.336s 00:41:02.434 user 0m9.937s 00:41:02.434 sys 0m5.999s 00:41:02.434 11:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:02.434 11:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:02.434 ************************************ 00:41:02.434 END TEST kernel_target_abort 00:41:02.434 ************************************ 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:02.434 rmmod nvme_tcp 00:41:02.434 rmmod nvme_fabrics 00:41:02.434 rmmod nvme_keyring 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1345848 ']' 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1345848 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1345848 ']' 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1345848 00:41:02.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1345848) - No such process 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1345848 is not found' 00:41:02.434 Process with pid 1345848 is not found 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:02.434 11:08:41 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:05.736 Waiting for block devices as requested 00:41:05.736 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:05.736 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:05.997 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:05.997 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:05.997 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:06.258 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:06.258 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:06.258 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:06.520 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:06.520 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:06.781 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:06.781 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:06.781 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:07.042 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:07.042 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:07.042 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:07.303 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:07.564 11:08:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:09.474 11:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:09.474 00:41:09.474 real 0m52.438s 00:41:09.474 user 1m5.224s 00:41:09.474 sys 0m18.991s 00:41:09.474 11:08:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:09.474 11:08:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:09.474 ************************************ 00:41:09.474 END TEST nvmf_abort_qd_sizes 00:41:09.474 ************************************ 00:41:09.734 11:08:48 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:09.734 11:08:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:09.734 11:08:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:09.734 11:08:48 -- common/autotest_common.sh@10 -- # set +x 00:41:09.734 ************************************ 00:41:09.734 START TEST keyring_file 00:41:09.734 ************************************ 00:41:09.734 11:08:48 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:09.734 * Looking for test storage... 
00:41:09.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:09.734 11:08:48 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:09.734 11:08:48 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:41:09.734 11:08:48 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:09.734 11:08:48 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@345 -- # : 1 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@353 -- # local d=1 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@355 -- # echo 1 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@353 -- # local d=2 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@355 -- # echo 2 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:09.734 11:08:48 keyring_file -- scripts/common.sh@368 -- # return 0 00:41:09.734 11:08:48 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:09.734 11:08:48 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:09.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.734 --rc genhtml_branch_coverage=1 00:41:09.734 --rc genhtml_function_coverage=1 00:41:09.734 --rc genhtml_legend=1 00:41:09.734 --rc geninfo_all_blocks=1 00:41:09.734 --rc geninfo_unexecuted_blocks=1 00:41:09.734 00:41:09.734 ' 00:41:09.734 11:08:48 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:09.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.734 --rc genhtml_branch_coverage=1 00:41:09.734 --rc genhtml_function_coverage=1 00:41:09.734 --rc genhtml_legend=1 00:41:09.734 --rc geninfo_all_blocks=1 
00:41:09.734 --rc geninfo_unexecuted_blocks=1 00:41:09.734 00:41:09.734 ' 00:41:09.734 11:08:48 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:09.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.734 --rc genhtml_branch_coverage=1 00:41:09.734 --rc genhtml_function_coverage=1 00:41:09.734 --rc genhtml_legend=1 00:41:09.734 --rc geninfo_all_blocks=1 00:41:09.734 --rc geninfo_unexecuted_blocks=1 00:41:09.734 00:41:09.734 ' 00:41:09.734 11:08:48 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:09.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.734 --rc genhtml_branch_coverage=1 00:41:09.734 --rc genhtml_function_coverage=1 00:41:09.734 --rc genhtml_legend=1 00:41:09.734 --rc geninfo_all_blocks=1 00:41:09.734 --rc geninfo_unexecuted_blocks=1 00:41:09.734 00:41:09.734 ' 00:41:09.734 11:08:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:09.734 11:08:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:09.734 11:08:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:09.995 11:08:48 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:41:09.995 11:08:48 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:09.995 11:08:48 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:09.995 11:08:48 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:09.995 11:08:48 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.995 11:08:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.995 11:08:48 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.995 11:08:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:41:09.995 11:08:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@51 -- # : 0 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:09.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:09.995 11:08:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:09.995 11:08:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:09.995 11:08:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:09.995 11:08:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:41:09.995 11:08:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:41:09.995 11:08:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:41:09.995 11:08:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:09.995 11:08:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
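The prep_key helper entered here (its body is traced just below) turns a raw hex PSK into NVMe TLS interchange format and stages it in a file the keyring can consume: mktemp creates the file, format_interchange_psk (an inline-python helper in nvmf/common.sh that emits an NVMeTLSkey-1 string) fills it, and chmod 0600 applies the owner-only permissions keyring_file insists on. Condensed, with this run's values:

    # condensed prep_key flow for key0, values taken from the trace below
    key=00112233445566778899aabbccddeeff          # key0's raw hex PSK
    path=$(mktemp)                                # /tmp/tmp.vdjs6GOvBk in this run
    format_interchange_psk "$key" 0 > "$path"     # 0 is the digest selector the test passes
    chmod 0600 "$path"                            # anything looser is rejected later in the log
    ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"

The same dance is repeated for key1 (112233445566778899aabbccddeeff00), and both registrations are then verified by piping keyring_get_keys through jq and comparing the stored .path against the temp file.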
00:41:09.995 11:08:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:09.995 11:08:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:09.995 11:08:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:09.995 11:08:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:09.995 11:08:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vdjs6GOvBk 00:41:09.995 11:08:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:09.995 11:08:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:09.995 11:08:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vdjs6GOvBk 00:41:09.995 11:08:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vdjs6GOvBk 00:41:09.995 11:08:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vdjs6GOvBk 00:41:09.995 11:08:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:41:09.995 11:08:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:09.995 11:08:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:41:09.995 11:08:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:09.995 11:08:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:09.995 11:08:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:09.995 11:08:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WxI8tr2Rcn 00:41:09.995 11:08:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:09.995 11:08:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:09.995 11:08:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:09.995 11:08:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:09.995 11:08:49 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:41:09.995 11:08:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:09.995 11:08:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:09.995 11:08:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WxI8tr2Rcn 00:41:09.995 11:08:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WxI8tr2Rcn 00:41:09.995 11:08:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.WxI8tr2Rcn 00:41:09.995 11:08:49 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:09.995 11:08:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=1356013 00:41:09.995 11:08:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1356013 00:41:09.995 11:08:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1356013 ']' 00:41:09.995 11:08:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:09.995 11:08:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:09.995 11:08:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:09.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:09.995 11:08:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:09.995 11:08:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:09.995 [2024-11-19 11:08:49.101697] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:41:09.995 [2024-11-19 11:08:49.101755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356013 ] 00:41:09.996 [2024-11-19 11:08:49.187186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:10.255 [2024-11-19 11:08:49.224069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:41:10.827 11:08:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:10.827 [2024-11-19 11:08:49.929109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:10.827 null0 00:41:10.827 [2024-11-19 11:08:49.961169] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:10.827 [2024-11-19 11:08:49.961570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.827 11:08:49 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:10.827 [2024-11-19 11:08:49.993230] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:41:10.827 request: 00:41:10.827 { 00:41:10.827 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:41:10.827 "secure_channel": false, 00:41:10.827 "listen_address": { 00:41:10.827 "trtype": "tcp", 00:41:10.827 "traddr": "127.0.0.1", 00:41:10.827 "trsvcid": "4420" 00:41:10.827 }, 00:41:10.827 "method": "nvmf_subsystem_add_listener", 00:41:10.827 "req_id": 1 00:41:10.827 } 00:41:10.827 Got JSON-RPC error response 00:41:10.827 response: 00:41:10.827 { 00:41:10.827 
"code": -32602, 00:41:10.827 "message": "Invalid parameters" 00:41:10.827 } 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:10.827 11:08:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:10.827 11:08:50 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:10.827 11:08:50 keyring_file -- keyring/file.sh@47 -- # bperfpid=1356085 00:41:10.827 11:08:50 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1356085 /var/tmp/bperf.sock 00:41:10.827 11:08:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1356085 ']' 00:41:10.827 11:08:50 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:41:10.827 11:08:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:10.827 11:08:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:10.827 11:08:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:10.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:10.827 11:08:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:10.827 11:08:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:11.089 [2024-11-19 11:08:50.066896] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:41:11.089 [2024-11-19 11:08:50.066968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356085 ] 00:41:11.089 [2024-11-19 11:08:50.162098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:11.089 [2024-11-19 11:08:50.215952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:12.033 11:08:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:12.033 11:08:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:41:12.033 11:08:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vdjs6GOvBk 00:41:12.033 11:08:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vdjs6GOvBk 00:41:12.033 11:08:51 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WxI8tr2Rcn 00:41:12.033 11:08:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WxI8tr2Rcn 00:41:12.033 11:08:51 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:41:12.033 11:08:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:41:12.033 11:08:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:12.033 11:08:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:12.033 11:08:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:41:12.294 11:08:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vdjs6GOvBk == \/\t\m\p\/\t\m\p\.\v\d\j\s\6\G\O\v\B\k ]] 00:41:12.294 11:08:51 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:41:12.294 11:08:51 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:41:12.294 11:08:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:12.294 11:08:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:12.294 11:08:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:12.555 11:08:51 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.WxI8tr2Rcn == \/\t\m\p\/\t\m\p\.\W\x\I\8\t\r\2\R\c\n ]] 00:41:12.555 11:08:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:41:12.555 11:08:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:12.555 11:08:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:12.555 11:08:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:12.555 11:08:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:12.555 11:08:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:12.555 11:08:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:41:12.555 11:08:51 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:41:12.555 11:08:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:12.555 11:08:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:12.555 11:08:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:12.555 11:08:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:12.555 11:08:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:12.814 11:08:51 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:41:12.814 11:08:51 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:12.814 11:08:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:13.076 [2024-11-19 11:08:52.060534] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:13.076 nvme0n1 00:41:13.076 11:08:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:41:13.076 11:08:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:13.076 11:08:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:13.076 11:08:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:13.076 11:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:13.076 11:08:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:13.337 11:08:52 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:41:13.337 11:08:52 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:41:13.337 11:08:52 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:41:13.337 11:08:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:13.337 11:08:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:13.337 11:08:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:13.337 11:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:13.598 11:08:52 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:41:13.598 11:08:52 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:13.598 Running I/O for 1 seconds... 00:41:14.537 17920.00 IOPS, 70.00 MiB/s 00:41:14.537 Latency(us) 00:41:14.537 [2024-11-19T10:08:53.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:14.537 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:41:14.537 nvme0n1 : 1.00 17979.03 70.23 0.00 0.00 7106.32 3795.63 15400.96 00:41:14.537 [2024-11-19T10:08:53.732Z] =================================================================================================================== 00:41:14.537 [2024-11-19T10:08:53.732Z] Total : 17979.03 70.23 0.00 0.00 7106.32 3795.63 15400.96 00:41:14.537 { 00:41:14.537 "results": [ 00:41:14.537 { 00:41:14.537 "job": "nvme0n1", 00:41:14.537 "core_mask": "0x2", 00:41:14.537 "workload": "randrw", 00:41:14.537 "percentage": 50, 00:41:14.537 "status": "finished", 00:41:14.537 "queue_depth": 128, 00:41:14.537 "io_size": 4096, 00:41:14.537 "runtime": 1.003836, 00:41:14.537 "iops": 17979.03243159241, 00:41:14.537 "mibps": 70.23059543590786, 00:41:14.537 "io_failed": 0, 00:41:14.537 "io_timeout": 0, 00:41:14.537 "avg_latency_us": 7106.317919621751, 00:41:14.537 "min_latency_us": 3795.6266666666666, 00:41:14.537 "max_latency_us": 15400.96 00:41:14.537 } 00:41:14.537 ], 00:41:14.537 "core_count": 1 00:41:14.537 } 00:41:14.537 11:08:53 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:14.537 11:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:14.797 11:08:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:41:14.797 11:08:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:14.797 11:08:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:14.797 11:08:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:14.797 11:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:14.797 11:08:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:15.057 11:08:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:41:15.057 11:08:54 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:41:15.057 11:08:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:15.057 11:08:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:15.057 11:08:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:15.057 11:08:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:15.057 11:08:54 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:15.057 11:08:54 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:41:15.057 11:08:54 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:15.057 11:08:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:15.057 11:08:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:15.057 11:08:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:15.057 11:08:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:15.057 11:08:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:15.057 11:08:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:15.057 11:08:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:15.057 11:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:15.318 [2024-11-19 11:08:54.353881] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:15.318 [2024-11-19 11:08:54.354834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x811c10 (107): Transport endpoint is not connected 00:41:15.318 [2024-11-19 11:08:54.355830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x811c10 (9): Bad file descriptor 00:41:15.318 [2024-11-19 11:08:54.356832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:41:15.318 [2024-11-19 11:08:54.356839] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:15.318 [2024-11-19 11:08:54.356845] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:41:15.318 [2024-11-19 11:08:54.356852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
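This is the suite's negative path: NOT (an autotest_common.sh helper that succeeds only when the wrapped command fails) wraps an attach that deliberately presents key1 to a target provisioned for key0, the key the earlier successful attach used. The TLS handshake dies, the qpair surfaces errno 107 (Transport endpoint is not connected), and the JSON-RPC dump just below records the attach failing with code -5, Input/output error. Stripped of the bperf_cmd wrapper, the failing call is:

    # expected to fail: key1 is not the PSK the target-side subsystem was set up with
    NOT ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1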
00:41:15.318 request: 00:41:15.318 { 00:41:15.318 "name": "nvme0", 00:41:15.318 "trtype": "tcp", 00:41:15.318 "traddr": "127.0.0.1", 00:41:15.318 "adrfam": "ipv4", 00:41:15.318 "trsvcid": "4420", 00:41:15.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:15.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:15.319 "prchk_reftag": false, 00:41:15.319 "prchk_guard": false, 00:41:15.319 "hdgst": false, 00:41:15.319 "ddgst": false, 00:41:15.319 "psk": "key1", 00:41:15.319 "allow_unrecognized_csi": false, 00:41:15.319 "method": "bdev_nvme_attach_controller", 00:41:15.319 "req_id": 1 00:41:15.319 } 00:41:15.319 Got JSON-RPC error response 00:41:15.319 response: 00:41:15.319 { 00:41:15.319 "code": -5, 00:41:15.319 "message": "Input/output error" 00:41:15.319 } 00:41:15.319 11:08:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:15.319 11:08:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:15.319 11:08:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:15.319 11:08:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:15.319 11:08:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:41:15.319 11:08:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:15.319 11:08:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:15.319 11:08:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:15.319 11:08:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:15.319 11:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:15.580 11:08:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:41:15.580 11:08:54 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:41:15.580 11:08:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:15.580 11:08:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:15.580 11:08:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:15.580 11:08:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:15.580 11:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:15.580 11:08:54 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:41:15.580 11:08:54 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:41:15.580 11:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:15.840 11:08:54 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:41:15.840 11:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:41:16.100 11:08:55 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:41:16.100 11:08:55 keyring_file -- keyring/file.sh@78 -- # jq length 00:41:16.100 11:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:16.100 11:08:55 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:41:16.100 11:08:55 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.vdjs6GOvBk 00:41:16.100 11:08:55 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vdjs6GOvBk 00:41:16.100 11:08:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:16.100 11:08:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vdjs6GOvBk 00:41:16.100 11:08:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:16.100 11:08:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:16.100 11:08:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:16.100 11:08:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:16.100 11:08:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vdjs6GOvBk 00:41:16.100 11:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vdjs6GOvBk 00:41:16.362 [2024-11-19 11:08:55.427622] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vdjs6GOvBk': 0100660 00:41:16.362 [2024-11-19 11:08:55.427643] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:41:16.362 request: 00:41:16.362 { 00:41:16.362 "name": "key0", 00:41:16.362 "path": "/tmp/tmp.vdjs6GOvBk", 00:41:16.362 "method": "keyring_file_add_key", 00:41:16.362 "req_id": 1 00:41:16.362 } 00:41:16.362 Got JSON-RPC error response 00:41:16.362 response: 00:41:16.362 { 00:41:16.362 "code": -1, 00:41:16.362 "message": "Operation not permitted" 00:41:16.362 } 00:41:16.362 11:08:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:16.362 11:08:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:16.362 11:08:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:16.362 11:08:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:16.362 11:08:55 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.vdjs6GOvBk 00:41:16.362 11:08:55 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vdjs6GOvBk 00:41:16.362 11:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vdjs6GOvBk 00:41:16.623 11:08:55 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.vdjs6GOvBk 00:41:16.623 11:08:55 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:41:16.623 11:08:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:16.623 11:08:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:16.623 11:08:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:16.623 11:08:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:16.623 11:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:16.884 11:08:55 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:41:16.884 11:08:55 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:16.884 11:08:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:16.884 11:08:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:16.884 11:08:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:16.884 11:08:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:16.884 11:08:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:16.884 11:08:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:16.884 11:08:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:16.884 11:08:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:16.884 [2024-11-19 11:08:55.989050] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vdjs6GOvBk': No such file or directory 00:41:16.884 [2024-11-19 11:08:55.989066] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:41:16.884 [2024-11-19 11:08:55.989080] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:41:16.884 [2024-11-19 11:08:55.989085] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:41:16.884 [2024-11-19 11:08:55.989091] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:16.884 [2024-11-19 11:08:55.989096] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:41:16.884 request: 00:41:16.884 { 00:41:16.884 "name": "nvme0", 00:41:16.884 "trtype": "tcp", 00:41:16.884 "traddr": "127.0.0.1", 00:41:16.884 "adrfam": "ipv4", 00:41:16.884 "trsvcid": "4420", 00:41:16.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:16.884 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:16.884 "prchk_reftag": false, 00:41:16.884 "prchk_guard": false, 00:41:16.884 "hdgst": false, 00:41:16.884 "ddgst": false, 00:41:16.884 "psk": "key0", 00:41:16.884 "allow_unrecognized_csi": false, 00:41:16.884 "method": "bdev_nvme_attach_controller", 00:41:16.884 "req_id": 1 00:41:16.884 } 00:41:16.884 Got JSON-RPC error response 00:41:16.884 response: 00:41:16.884 { 00:41:16.884 "code": -19, 00:41:16.884 "message": "No such device" 00:41:16.884 } 00:41:16.884 11:08:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:16.884 11:08:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:16.884 11:08:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:16.884 11:08:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:16.884 11:08:56 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:41:16.884 11:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:17.145 11:08:56 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:17.145 11:08:56 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:41:17.145 11:08:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:17.145 11:08:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:17.145 11:08:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:17.145 11:08:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:17.145 11:08:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.A7ugFHP8hH 00:41:17.145 11:08:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:17.145 11:08:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:17.145 11:08:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:17.145 11:08:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:17.145 11:08:56 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:41:17.145 11:08:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:17.145 11:08:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:17.145 11:08:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.A7ugFHP8hH 00:41:17.145 11:08:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.A7ugFHP8hH 00:41:17.145 11:08:56 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.A7ugFHP8hH 00:41:17.145 11:08:56 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.A7ugFHP8hH 00:41:17.145 11:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.A7ugFHP8hH 00:41:17.405 11:08:56 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:17.405 11:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:17.665 nvme0n1 00:41:17.665 11:08:56 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:41:17.665 11:08:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:17.665 11:08:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:17.665 11:08:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:17.665 11:08:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:17.665 11:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:17.665 11:08:56 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:41:17.665 11:08:56 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:41:17.665 11:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:17.924 11:08:56 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:41:17.924 11:08:56 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:41:17.924 11:08:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:17.924 11:08:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:17.924 11:08:56 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:18.185 11:08:57 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:41:18.185 11:08:57 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:41:18.185 11:08:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:18.185 11:08:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:18.185 11:08:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:18.185 11:08:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:18.185 11:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:18.185 11:08:57 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:41:18.185 11:08:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:18.185 11:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:18.445 11:08:57 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:41:18.445 11:08:57 keyring_file -- keyring/file.sh@105 -- # jq length 00:41:18.445 11:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:18.707 11:08:57 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:41:18.707 11:08:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.A7ugFHP8hH 00:41:18.707 11:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.A7ugFHP8hH 00:41:18.707 11:08:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WxI8tr2Rcn 00:41:18.707 11:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WxI8tr2Rcn 00:41:18.966 11:08:58 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:18.966 11:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:19.227 nvme0n1 00:41:19.227 11:08:58 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:41:19.227 11:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:41:19.489 11:08:58 keyring_file -- keyring/file.sh@113 -- # config='{ 00:41:19.489 "subsystems": [ 00:41:19.489 { 00:41:19.489 "subsystem": "keyring", 00:41:19.489 "config": [ 00:41:19.489 { 00:41:19.489 "method": "keyring_file_add_key", 00:41:19.489 "params": { 00:41:19.489 "name": "key0", 00:41:19.489 "path": "/tmp/tmp.A7ugFHP8hH" 00:41:19.489 } 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "method": "keyring_file_add_key", 00:41:19.489 "params": { 00:41:19.489 "name": "key1", 00:41:19.489 "path": "/tmp/tmp.WxI8tr2Rcn" 00:41:19.489 } 00:41:19.489 } 00:41:19.489 ] 00:41:19.489 
}, 00:41:19.489 { 00:41:19.489 "subsystem": "iobuf", 00:41:19.489 "config": [ 00:41:19.489 { 00:41:19.489 "method": "iobuf_set_options", 00:41:19.489 "params": { 00:41:19.489 "small_pool_count": 8192, 00:41:19.489 "large_pool_count": 1024, 00:41:19.489 "small_bufsize": 8192, 00:41:19.489 "large_bufsize": 135168, 00:41:19.489 "enable_numa": false 00:41:19.489 } 00:41:19.489 } 00:41:19.489 ] 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "subsystem": "sock", 00:41:19.489 "config": [ 00:41:19.489 { 00:41:19.489 "method": "sock_set_default_impl", 00:41:19.489 "params": { 00:41:19.489 "impl_name": "posix" 00:41:19.489 } 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "method": "sock_impl_set_options", 00:41:19.489 "params": { 00:41:19.489 "impl_name": "ssl", 00:41:19.489 "recv_buf_size": 4096, 00:41:19.489 "send_buf_size": 4096, 00:41:19.489 "enable_recv_pipe": true, 00:41:19.489 "enable_quickack": false, 00:41:19.489 "enable_placement_id": 0, 00:41:19.489 "enable_zerocopy_send_server": true, 00:41:19.489 "enable_zerocopy_send_client": false, 00:41:19.489 "zerocopy_threshold": 0, 00:41:19.489 "tls_version": 0, 00:41:19.489 "enable_ktls": false 00:41:19.489 } 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "method": "sock_impl_set_options", 00:41:19.489 "params": { 00:41:19.489 "impl_name": "posix", 00:41:19.489 "recv_buf_size": 2097152, 00:41:19.489 "send_buf_size": 2097152, 00:41:19.489 "enable_recv_pipe": true, 00:41:19.489 "enable_quickack": false, 00:41:19.489 "enable_placement_id": 0, 00:41:19.489 "enable_zerocopy_send_server": true, 00:41:19.489 "enable_zerocopy_send_client": false, 00:41:19.489 "zerocopy_threshold": 0, 00:41:19.489 "tls_version": 0, 00:41:19.489 "enable_ktls": false 00:41:19.489 } 00:41:19.489 } 00:41:19.489 ] 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "subsystem": "vmd", 00:41:19.489 "config": [] 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "subsystem": "accel", 00:41:19.489 "config": [ 00:41:19.489 { 00:41:19.489 "method": "accel_set_options", 00:41:19.489 "params": { 00:41:19.489 "small_cache_size": 128, 00:41:19.489 "large_cache_size": 16, 00:41:19.489 "task_count": 2048, 00:41:19.489 "sequence_count": 2048, 00:41:19.489 "buf_count": 2048 00:41:19.489 } 00:41:19.489 } 00:41:19.489 ] 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "subsystem": "bdev", 00:41:19.489 "config": [ 00:41:19.489 { 00:41:19.489 "method": "bdev_set_options", 00:41:19.489 "params": { 00:41:19.489 "bdev_io_pool_size": 65535, 00:41:19.489 "bdev_io_cache_size": 256, 00:41:19.489 "bdev_auto_examine": true, 00:41:19.489 "iobuf_small_cache_size": 128, 00:41:19.489 "iobuf_large_cache_size": 16 00:41:19.489 } 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "method": "bdev_raid_set_options", 00:41:19.489 "params": { 00:41:19.489 "process_window_size_kb": 1024, 00:41:19.489 "process_max_bandwidth_mb_sec": 0 00:41:19.489 } 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "method": "bdev_iscsi_set_options", 00:41:19.489 "params": { 00:41:19.489 "timeout_sec": 30 00:41:19.489 } 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "method": "bdev_nvme_set_options", 00:41:19.489 "params": { 00:41:19.489 "action_on_timeout": "none", 00:41:19.489 "timeout_us": 0, 00:41:19.489 "timeout_admin_us": 0, 00:41:19.489 "keep_alive_timeout_ms": 10000, 00:41:19.489 "arbitration_burst": 0, 00:41:19.489 "low_priority_weight": 0, 00:41:19.489 "medium_priority_weight": 0, 00:41:19.489 "high_priority_weight": 0, 00:41:19.489 "nvme_adminq_poll_period_us": 10000, 00:41:19.489 "nvme_ioq_poll_period_us": 0, 00:41:19.489 "io_queue_requests": 512, 00:41:19.489 
"delay_cmd_submit": true, 00:41:19.489 "transport_retry_count": 4, 00:41:19.489 "bdev_retry_count": 3, 00:41:19.489 "transport_ack_timeout": 0, 00:41:19.489 "ctrlr_loss_timeout_sec": 0, 00:41:19.489 "reconnect_delay_sec": 0, 00:41:19.489 "fast_io_fail_timeout_sec": 0, 00:41:19.489 "disable_auto_failback": false, 00:41:19.489 "generate_uuids": false, 00:41:19.489 "transport_tos": 0, 00:41:19.489 "nvme_error_stat": false, 00:41:19.489 "rdma_srq_size": 0, 00:41:19.489 "io_path_stat": false, 00:41:19.489 "allow_accel_sequence": false, 00:41:19.489 "rdma_max_cq_size": 0, 00:41:19.489 "rdma_cm_event_timeout_ms": 0, 00:41:19.489 "dhchap_digests": [ 00:41:19.489 "sha256", 00:41:19.489 "sha384", 00:41:19.489 "sha512" 00:41:19.489 ], 00:41:19.489 "dhchap_dhgroups": [ 00:41:19.489 "null", 00:41:19.489 "ffdhe2048", 00:41:19.489 "ffdhe3072", 00:41:19.489 "ffdhe4096", 00:41:19.489 "ffdhe6144", 00:41:19.489 "ffdhe8192" 00:41:19.489 ] 00:41:19.489 } 00:41:19.489 }, 00:41:19.489 { 00:41:19.489 "method": "bdev_nvme_attach_controller", 00:41:19.489 "params": { 00:41:19.489 "name": "nvme0", 00:41:19.489 "trtype": "TCP", 00:41:19.489 "adrfam": "IPv4", 00:41:19.489 "traddr": "127.0.0.1", 00:41:19.489 "trsvcid": "4420", 00:41:19.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:19.489 "prchk_reftag": false, 00:41:19.489 "prchk_guard": false, 00:41:19.489 "ctrlr_loss_timeout_sec": 0, 00:41:19.489 "reconnect_delay_sec": 0, 00:41:19.489 "fast_io_fail_timeout_sec": 0, 00:41:19.489 "psk": "key0", 00:41:19.490 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:19.490 "hdgst": false, 00:41:19.490 "ddgst": false, 00:41:19.490 "multipath": "multipath" 00:41:19.490 } 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "method": "bdev_nvme_set_hotplug", 00:41:19.490 "params": { 00:41:19.490 "period_us": 100000, 00:41:19.490 "enable": false 00:41:19.490 } 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "method": "bdev_wait_for_examine" 00:41:19.490 } 00:41:19.490 ] 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "subsystem": "nbd", 00:41:19.490 "config": [] 00:41:19.490 } 00:41:19.490 ] 00:41:19.490 }' 00:41:19.490 11:08:58 keyring_file -- keyring/file.sh@115 -- # killprocess 1356085 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1356085 ']' 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1356085 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1356085 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1356085' 00:41:19.490 killing process with pid 1356085 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@973 -- # kill 1356085 00:41:19.490 Received shutdown signal, test time was about 1.000000 seconds 00:41:19.490 00:41:19.490 Latency(us) 00:41:19.490 [2024-11-19T10:08:58.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:19.490 [2024-11-19T10:08:58.685Z] =================================================================================================================== 00:41:19.490 [2024-11-19T10:08:58.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:19.490 11:08:58 
keyring_file -- common/autotest_common.sh@978 -- # wait 1356085 00:41:19.490 11:08:58 keyring_file -- keyring/file.sh@118 -- # bperfpid=1357895 00:41:19.490 11:08:58 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1357895 /var/tmp/bperf.sock 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1357895 ']' 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:19.490 11:08:58 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:19.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:19.490 11:08:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:19.490 11:08:58 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:41:19.490 "subsystems": [ 00:41:19.490 { 00:41:19.490 "subsystem": "keyring", 00:41:19.490 "config": [ 00:41:19.490 { 00:41:19.490 "method": "keyring_file_add_key", 00:41:19.490 "params": { 00:41:19.490 "name": "key0", 00:41:19.490 "path": "/tmp/tmp.A7ugFHP8hH" 00:41:19.490 } 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "method": "keyring_file_add_key", 00:41:19.490 "params": { 00:41:19.490 "name": "key1", 00:41:19.490 "path": "/tmp/tmp.WxI8tr2Rcn" 00:41:19.490 } 00:41:19.490 } 00:41:19.490 ] 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "subsystem": "iobuf", 00:41:19.490 "config": [ 00:41:19.490 { 00:41:19.490 "method": "iobuf_set_options", 00:41:19.490 "params": { 00:41:19.490 "small_pool_count": 8192, 00:41:19.490 "large_pool_count": 1024, 00:41:19.490 "small_bufsize": 8192, 00:41:19.490 "large_bufsize": 135168, 00:41:19.490 "enable_numa": false 00:41:19.490 } 00:41:19.490 } 00:41:19.490 ] 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "subsystem": "sock", 00:41:19.490 "config": [ 00:41:19.490 { 00:41:19.490 "method": "sock_set_default_impl", 00:41:19.490 "params": { 00:41:19.490 "impl_name": "posix" 00:41:19.490 } 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "method": "sock_impl_set_options", 00:41:19.490 "params": { 00:41:19.490 "impl_name": "ssl", 00:41:19.490 "recv_buf_size": 4096, 00:41:19.490 "send_buf_size": 4096, 00:41:19.490 "enable_recv_pipe": true, 00:41:19.490 "enable_quickack": false, 00:41:19.490 "enable_placement_id": 0, 00:41:19.490 "enable_zerocopy_send_server": true, 00:41:19.490 "enable_zerocopy_send_client": false, 00:41:19.490 "zerocopy_threshold": 0, 00:41:19.490 "tls_version": 0, 00:41:19.490 "enable_ktls": false 00:41:19.490 } 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "method": "sock_impl_set_options", 00:41:19.490 "params": { 00:41:19.490 "impl_name": "posix", 00:41:19.490 "recv_buf_size": 2097152, 00:41:19.490 "send_buf_size": 2097152, 00:41:19.490 "enable_recv_pipe": true, 00:41:19.490 "enable_quickack": false, 00:41:19.490 "enable_placement_id": 0, 00:41:19.490 "enable_zerocopy_send_server": true, 00:41:19.490 "enable_zerocopy_send_client": false, 00:41:19.490 "zerocopy_threshold": 0, 00:41:19.490 "tls_version": 0, 00:41:19.490 "enable_ktls": false 00:41:19.490 } 00:41:19.490 } 00:41:19.490 ] 00:41:19.490 }, 
00:41:19.490 { 00:41:19.490 "subsystem": "vmd", 00:41:19.490 "config": [] 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "subsystem": "accel", 00:41:19.490 "config": [ 00:41:19.490 { 00:41:19.490 "method": "accel_set_options", 00:41:19.490 "params": { 00:41:19.490 "small_cache_size": 128, 00:41:19.490 "large_cache_size": 16, 00:41:19.490 "task_count": 2048, 00:41:19.490 "sequence_count": 2048, 00:41:19.490 "buf_count": 2048 00:41:19.490 } 00:41:19.490 } 00:41:19.490 ] 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "subsystem": "bdev", 00:41:19.490 "config": [ 00:41:19.490 { 00:41:19.490 "method": "bdev_set_options", 00:41:19.490 "params": { 00:41:19.490 "bdev_io_pool_size": 65535, 00:41:19.490 "bdev_io_cache_size": 256, 00:41:19.490 "bdev_auto_examine": true, 00:41:19.490 "iobuf_small_cache_size": 128, 00:41:19.490 "iobuf_large_cache_size": 16 00:41:19.490 } 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "method": "bdev_raid_set_options", 00:41:19.490 "params": { 00:41:19.490 "process_window_size_kb": 1024, 00:41:19.490 "process_max_bandwidth_mb_sec": 0 00:41:19.490 } 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "method": "bdev_iscsi_set_options", 00:41:19.490 "params": { 00:41:19.490 "timeout_sec": 30 00:41:19.490 } 00:41:19.490 }, 00:41:19.490 { 00:41:19.490 "method": "bdev_nvme_set_options", 00:41:19.490 "params": { 00:41:19.490 "action_on_timeout": "none", 00:41:19.490 "timeout_us": 0, 00:41:19.490 "timeout_admin_us": 0, 00:41:19.490 "keep_alive_timeout_ms": 10000, 00:41:19.490 "arbitration_burst": 0, 00:41:19.490 "low_priority_weight": 0, 00:41:19.490 "medium_priority_weight": 0, 00:41:19.490 "high_priority_weight": 0, 00:41:19.490 "nvme_adminq_poll_period_us": 10000, 00:41:19.490 "nvme_ioq_poll_period_us": 0, 00:41:19.490 "io_queue_requests": 512, 00:41:19.490 "delay_cmd_submit": true, 00:41:19.490 "transport_retry_count": 4, 00:41:19.490 "bdev_retry_count": 3, 00:41:19.490 "transport_ack_timeout": 0, 00:41:19.490 "ctrlr_loss_timeout_sec": 0, 00:41:19.490 "reconnect_delay_sec": 0, 00:41:19.490 "fast_io_fail_timeout_sec": 0, 00:41:19.490 "disable_auto_failback": false, 00:41:19.490 "generate_uuids": false, 00:41:19.490 "transport_tos": 0, 00:41:19.490 "nvme_error_stat": false, 00:41:19.490 "rdma_srq_size": 0, 00:41:19.490 "io_path_stat": false, 00:41:19.490 "allow_accel_sequence": false, 00:41:19.491 "rdma_max_cq_size": 0, 00:41:19.491 "rdma_cm_event_timeout_ms": 0, 00:41:19.491 "dhchap_digests": [ 00:41:19.491 "sha256", 00:41:19.491 "sha384", 00:41:19.491 "sha512" 00:41:19.491 ], 00:41:19.491 "dhchap_dhgroups": [ 00:41:19.491 "null", 00:41:19.491 "ffdhe2048", 00:41:19.491 "ffdhe3072", 00:41:19.491 "ffdhe4096", 00:41:19.491 "ffdhe6144", 00:41:19.491 "ffdhe8192" 00:41:19.491 ] 00:41:19.491 } 00:41:19.491 }, 00:41:19.491 { 00:41:19.491 "method": "bdev_nvme_attach_controller", 00:41:19.491 "params": { 00:41:19.491 "name": "nvme0", 00:41:19.491 "trtype": "TCP", 00:41:19.491 "adrfam": "IPv4", 00:41:19.491 "traddr": "127.0.0.1", 00:41:19.491 "trsvcid": "4420", 00:41:19.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:19.491 "prchk_reftag": false, 00:41:19.491 "prchk_guard": false, 00:41:19.491 "ctrlr_loss_timeout_sec": 0, 00:41:19.491 "reconnect_delay_sec": 0, 00:41:19.491 "fast_io_fail_timeout_sec": 0, 00:41:19.491 "psk": "key0", 00:41:19.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:19.491 "hdgst": false, 00:41:19.491 "ddgst": false, 00:41:19.491 "multipath": "multipath" 00:41:19.491 } 00:41:19.491 }, 00:41:19.491 { 00:41:19.491 "method": "bdev_nvme_set_hotplug", 00:41:19.491 "params": { 
00:41:19.491 "period_us": 100000, 00:41:19.491 "enable": false 00:41:19.491 } 00:41:19.491 }, 00:41:19.491 { 00:41:19.491 "method": "bdev_wait_for_examine" 00:41:19.491 } 00:41:19.491 ] 00:41:19.491 }, 00:41:19.491 { 00:41:19.491 "subsystem": "nbd", 00:41:19.491 "config": [] 00:41:19.491 } 00:41:19.491 ] 00:41:19.491 }' 00:41:19.752 [2024-11-19 11:08:58.703783] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 00:41:19.752 [2024-11-19 11:08:58.703840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357895 ] 00:41:19.752 [2024-11-19 11:08:58.784886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.752 [2024-11-19 11:08:58.813856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:20.013 [2024-11-19 11:08:58.956480] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:20.586 11:08:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:20.586 11:08:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:41:20.586 11:08:59 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:41:20.586 11:08:59 keyring_file -- keyring/file.sh@121 -- # jq length 00:41:20.586 11:08:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:20.586 11:08:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:41:20.586 11:08:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:41:20.586 11:08:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:20.586 11:08:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:20.586 11:08:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:20.586 11:08:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:20.586 11:08:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:20.846 11:08:59 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:41:20.846 11:08:59 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:41:20.846 11:08:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:20.846 11:08:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:20.846 11:08:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:20.846 11:08:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:20.846 11:08:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:20.846 11:09:00 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:41:20.846 11:09:00 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:41:20.846 11:09:00 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:41:20.846 11:09:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:41:21.106 11:09:00 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:41:21.106 11:09:00 keyring_file -- keyring/file.sh@1 -- # cleanup 00:41:21.106 11:09:00 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.A7ugFHP8hH /tmp/tmp.WxI8tr2Rcn 00:41:21.106 11:09:00 keyring_file -- keyring/file.sh@20 -- # killprocess 1357895 00:41:21.106 11:09:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1357895 ']' 00:41:21.106 11:09:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1357895 00:41:21.106 11:09:00 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:21.106 11:09:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:21.106 11:09:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1357895 00:41:21.106 11:09:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:21.106 11:09:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:21.106 11:09:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1357895' 00:41:21.106 killing process with pid 1357895 00:41:21.106 11:09:00 keyring_file -- common/autotest_common.sh@973 -- # kill 1357895 00:41:21.106 Received shutdown signal, test time was about 1.000000 seconds 00:41:21.106 00:41:21.106 Latency(us) 00:41:21.106 [2024-11-19T10:09:00.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:21.106 [2024-11-19T10:09:00.301Z] =================================================================================================================== 00:41:21.106 [2024-11-19T10:09:00.301Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:21.106 11:09:00 keyring_file -- common/autotest_common.sh@978 -- # wait 1357895 00:41:21.367 11:09:00 keyring_file -- keyring/file.sh@21 -- # killprocess 1356013 00:41:21.367 11:09:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1356013 ']' 00:41:21.367 11:09:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1356013 00:41:21.367 11:09:00 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:21.367 11:09:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:21.367 11:09:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1356013 00:41:21.367 11:09:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:21.367 11:09:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:21.367 11:09:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1356013' 00:41:21.367 killing process with pid 1356013 00:41:21.367 11:09:00 keyring_file -- common/autotest_common.sh@973 -- # kill 1356013 00:41:21.367 11:09:00 keyring_file -- common/autotest_common.sh@978 -- # wait 1356013 00:41:21.628 00:41:21.628 real 0m11.912s 00:41:21.628 user 0m28.834s 00:41:21.628 sys 0m2.599s 00:41:21.628 11:09:00 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:21.628 11:09:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:21.628 ************************************ 00:41:21.628 END TEST keyring_file 00:41:21.628 ************************************ 00:41:21.628 11:09:00 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:41:21.628 11:09:00 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:21.628 11:09:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:21.628 11:09:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:21.628 11:09:00 
-- common/autotest_common.sh@10 -- # set +x 00:41:21.628 ************************************ 00:41:21.628 START TEST keyring_linux 00:41:21.628 ************************************ 00:41:21.628 11:09:00 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:21.628 Joined session keyring: 488464413 00:41:21.628 * Looking for test storage... 00:41:21.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:21.628 11:09:00 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:21.628 11:09:00 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:41:21.628 11:09:00 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:21.890 11:09:00 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@345 -- # : 1 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@368 -- # return 0 00:41:21.890 11:09:00 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:21.890 11:09:00 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:21.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.890 --rc genhtml_branch_coverage=1 00:41:21.890 --rc genhtml_function_coverage=1 00:41:21.890 --rc genhtml_legend=1 00:41:21.890 --rc geninfo_all_blocks=1 00:41:21.890 --rc geninfo_unexecuted_blocks=1 00:41:21.890 00:41:21.890 ' 00:41:21.890 11:09:00 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:21.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.890 --rc genhtml_branch_coverage=1 00:41:21.890 --rc genhtml_function_coverage=1 00:41:21.890 --rc genhtml_legend=1 00:41:21.890 --rc geninfo_all_blocks=1 00:41:21.890 --rc geninfo_unexecuted_blocks=1 00:41:21.890 00:41:21.890 ' 00:41:21.890 11:09:00 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:21.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.890 --rc genhtml_branch_coverage=1 00:41:21.890 --rc genhtml_function_coverage=1 00:41:21.890 --rc genhtml_legend=1 00:41:21.890 --rc geninfo_all_blocks=1 00:41:21.890 --rc geninfo_unexecuted_blocks=1 00:41:21.890 00:41:21.890 ' 00:41:21.890 11:09:00 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:21.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.890 --rc genhtml_branch_coverage=1 00:41:21.890 --rc genhtml_function_coverage=1 00:41:21.890 --rc genhtml_legend=1 00:41:21.890 --rc geninfo_all_blocks=1 00:41:21.890 --rc geninfo_unexecuted_blocks=1 00:41:21.890 00:41:21.890 ' 00:41:21.890 11:09:00 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:21.890 11:09:00 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:21.890 11:09:00 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:21.890 11:09:00 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.890 11:09:00 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.890 11:09:00 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.890 11:09:00 keyring_linux -- paths/export.sh@5 -- # export PATH 00:41:21.890 11:09:00 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
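An aside on the session keyring set up at the top of this test: the keyring_linux suite is launched through scripts/keyctl-session-wrapper, which is why "Joined session keyring: 488464413" appears above — every key the test adds lives in a throwaway session keyring and disappears with the session. A minimal sketch of that pattern, assuming only that keyctl from keyutils is installed; the :spdk-test:key0 name and the @s session-keyring anchor match what the trace uses below, and the key payload is elided:

keyctl session - sh -c '
  # Add a user-type key to the session keyring (@s); keyctl prints
  # the new key serial on stdout, as the trace shows below.
  sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:..." @s)
  # Resolve the name back to its serial, then dump the payload.
  keyctl search @s user :spdk-test:key0
  keyctl print "$sn"
  # Detach it again; keyctl reports "1 links removed".
  keyctl unlink "$sn"
'

keyctl session itself prints "Joined session keyring: <serial>" before running the wrapped command, which is exactly the first line of this test's output. The serials printed when the keys are added below (899503310 and 650399315) are what the cleanup path later resolves with keyctl search and feeds to keyctl unlink.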
00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:21.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:21.890 11:09:00 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:21.890 11:09:00 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:21.890 11:09:00 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:21.890 11:09:00 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:41:21.890 11:09:00 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:41:21.890 11:09:00 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:41:21.890 11:09:00 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:41:21.890 11:09:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:21.890 11:09:00 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:41:21.890 11:09:00 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:21.890 11:09:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:21.890 11:09:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:41:21.890 11:09:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:21.890 11:09:00 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:41:21.891 11:09:00 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:41:21.891 11:09:00 keyring_linux -- nvmf/common.sh@733 -- # python - 00:41:21.891 11:09:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:41:21.891 11:09:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:41:21.891 /tmp/:spdk-test:key0 00:41:21.891 11:09:00 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:41:21.891 11:09:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:21.891 11:09:00 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:41:21.891 11:09:00 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:21.891 11:09:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:21.891 11:09:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:41:21.891 
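prep_key above converts the raw hex strings into TLS PSK interchange form before writing them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. The inline python that format_key pipes the key through is not shown in the trace; the following is a sketch of what it plausibly computes, assuming the usual interchange layout — identity prefix, a two-digit hash field ("00" for digest 0, i.e. no PSK hash), then base64 of the configured key bytes with a little-endian CRC-32 appended, and a trailing colon:

# Hypothetical stand-in for the format_key helper traced above.
format_key() {
  prefix=$1 key=$2 digest=$3 python - <<'EOF'
import base64, os, struct, zlib

key = os.environ["key"].encode()
# CRC-32 of the key bytes, appended little-endian; it guards
# against mistyped interchange strings.
crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
print("{}:{:02d}:{}:".format(os.environ["prefix"],
                             int(os.environ["digest"]),
                             base64.b64encode(key + crc).decode()))
EOF
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0

For this key the output should match the NVMeTLSkey-1:00:MDAx...JEiQ: string that keyctl stores below; digest values of 1 or 2 would instead select the hashed (SHA-256/SHA-384) variants of the field.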
11:09:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:21.891 11:09:00 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:21.891 11:09:00 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:41:21.891 11:09:00 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:21.891 11:09:00 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:41:21.891 11:09:00 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:41:21.891 11:09:00 keyring_linux -- nvmf/common.sh@733 -- # python - 00:41:21.891 11:09:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:41:21.891 11:09:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:41:21.891 /tmp/:spdk-test:key1 00:41:21.891 11:09:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1358397 00:41:21.891 11:09:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1358397 00:41:21.891 11:09:01 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:21.891 11:09:01 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1358397 ']' 00:41:21.891 11:09:01 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:21.891 11:09:01 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:21.891 11:09:01 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:21.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:21.891 11:09:01 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:21.891 11:09:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:22.152 [2024-11-19 11:09:01.086172] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:41:22.152 [2024-11-19 11:09:01.086226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358397 ] 00:41:22.152 [2024-11-19 11:09:01.171599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:22.153 [2024-11-19 11:09:01.202317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.724 11:09:01 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:22.724 11:09:01 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:41:22.724 11:09:01 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:41:22.724 11:09:01 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.724 11:09:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:22.724 [2024-11-19 11:09:01.891409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:22.724 null0 00:41:22.986 [2024-11-19 11:09:01.923463] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:22.986 [2024-11-19 11:09:01.923801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:22.986 11:09:01 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.986 11:09:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:41:22.986 899503310 00:41:22.986 11:09:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:41:22.986 650399315 00:41:22.986 11:09:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1358777 00:41:22.986 11:09:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1358777 /var/tmp/bperf.sock 00:41:22.986 11:09:01 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:41:22.986 11:09:01 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1358777 ']' 00:41:22.986 11:09:01 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:22.986 11:09:01 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:22.986 11:09:01 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:22.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:22.986 11:09:01 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:22.986 11:09:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:22.986 [2024-11-19 11:09:01.999841] Starting SPDK v25.01-pre git sha1 03b7aa9c7 / DPDK 24.03.0 initialization... 
00:41:22.986 [2024-11-19 11:09:01.999889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358777 ] 00:41:22.986 [2024-11-19 11:09:02.081773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:22.986 [2024-11-19 11:09:02.111604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:23.928 11:09:02 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:23.928 11:09:02 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:41:23.928 11:09:02 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:41:23.928 11:09:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:41:23.928 11:09:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:41:23.928 11:09:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:24.191 11:09:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:24.191 11:09:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:24.191 [2024-11-19 11:09:03.327550] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:24.452 nvme0n1 00:41:24.452 11:09:03 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:41:24.452 11:09:03 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:41:24.452 11:09:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:24.452 11:09:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:24.452 11:09:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:24.452 11:09:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:24.452 11:09:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:41:24.452 11:09:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:24.452 11:09:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:41:24.452 11:09:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:41:24.452 11:09:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:24.452 11:09:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:41:24.452 11:09:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:24.715 11:09:03 keyring_linux -- keyring/linux.sh@25 -- # sn=899503310 00:41:24.715 11:09:03 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:41:24.715 11:09:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:24.715 11:09:03 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 899503310 == \8\9\9\5\0\3\3\1\0 ]] 00:41:24.715 11:09:03 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 899503310 00:41:24.715 11:09:03 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:41:24.715 11:09:03 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:24.715 Running I/O for 1 seconds... 00:41:26.101 24091.00 IOPS, 94.11 MiB/s 00:41:26.101 Latency(us) 00:41:26.101 [2024-11-19T10:09:05.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:26.101 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:41:26.101 nvme0n1 : 1.01 24091.69 94.11 0.00 0.00 5297.20 3631.79 8246.61 00:41:26.101 [2024-11-19T10:09:05.296Z] =================================================================================================================== 00:41:26.101 [2024-11-19T10:09:05.296Z] Total : 24091.69 94.11 0.00 0.00 5297.20 3631.79 8246.61 00:41:26.101 { 00:41:26.101 "results": [ 00:41:26.101 { 00:41:26.101 "job": "nvme0n1", 00:41:26.101 "core_mask": "0x2", 00:41:26.101 "workload": "randread", 00:41:26.101 "status": "finished", 00:41:26.101 "queue_depth": 128, 00:41:26.101 "io_size": 4096, 00:41:26.101 "runtime": 1.005326, 00:41:26.101 "iops": 24091.687671461794, 00:41:26.101 "mibps": 94.10815496664763, 00:41:26.101 "io_failed": 0, 00:41:26.101 "io_timeout": 0, 00:41:26.101 "avg_latency_us": 5297.197379576109, 00:41:26.101 "min_latency_us": 3631.786666666667, 00:41:26.101 "max_latency_us": 8246.613333333333 00:41:26.101 } 00:41:26.101 ], 00:41:26.101 "core_count": 1 00:41:26.101 } 00:41:26.101 11:09:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:26.101 11:09:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:26.101 11:09:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:41:26.101 11:09:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:41:26.101 11:09:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:26.101 11:09:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:26.101 11:09:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:26.101 11:09:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:26.101 11:09:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:41:26.101 11:09:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:26.101 11:09:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:41:26.101 11:09:05 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:26.101 11:09:05 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:41:26.101 11:09:05 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:41:26.101 11:09:05 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:26.101 11:09:05 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:26.101 11:09:05 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:26.101 11:09:05 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:26.101 11:09:05 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:26.101 11:09:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:26.362 [2024-11-19 11:09:05.422656] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:26.362 [2024-11-19 11:09:05.423286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e9480 (107): Transport endpoint is not connected 00:41:26.362 [2024-11-19 11:09:05.424283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e9480 (9): Bad file descriptor 00:41:26.362 [2024-11-19 11:09:05.425284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:41:26.362 [2024-11-19 11:09:05.425291] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:26.362 [2024-11-19 11:09:05.425298] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:41:26.362 [2024-11-19 11:09:05.425304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
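The failed attach above is the negative half of the keyring_linux test: :spdk-test:key0 was staged in the kernel session keyring, :spdk-test:key1 never was, so the controller attach that names key1 is expected to error out (the JSON-RPC error response follows below). A minimal standalone sketch of the same flow, assuming an already-running bdevperf app in RPC-wait mode with its socket at /var/tmp/bperf.sock and an NVMe-oF/TCP target with TLS enabled listening at 127.0.0.1:4420, exactly as in this run; the key names and PSK literal are the ones printed earlier in this log:

#!/usr/bin/env bash
# Sketch only: bdevperf must already be waiting for RPCs on /var/tmp/bperf.sock,
# and a TLS-enabled SPDK target must be listening at 127.0.0.1:4420.
set -euo pipefail

rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"   # run from an SPDK checkout
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'  # PSK shown above

# Stage the PSK as a "user" key in the session keyring; SPDK resolves the
# leading-colon ":name" PSK form via the kernel keyring once this is enabled.
sn=$(keyctl add user ':spdk-test:key0' "$psk" @s)

$rpc keyring_linux_set_options --enable
$rpc framework_start_init

# Positive path: key0 is present in the keyring, so the TLS attach succeeds.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
$rpc bdev_nvme_detach_controller nvme0

# Negative path: key1 was never added, so this attach must fail, as it does above.
if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
then
    echo 'attach with missing key unexpectedly succeeded' >&2
    exit 1
fi

keyctl unlink "$sn"   # "1 links removed", mirroring the cleanup later in this log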
00:41:26.362 request: 00:41:26.362 { 00:41:26.362 "name": "nvme0", 00:41:26.362 "trtype": "tcp", 00:41:26.362 "traddr": "127.0.0.1", 00:41:26.362 "adrfam": "ipv4", 00:41:26.362 "trsvcid": "4420", 00:41:26.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:26.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:26.362 "prchk_reftag": false, 00:41:26.362 "prchk_guard": false, 00:41:26.362 "hdgst": false, 00:41:26.362 "ddgst": false, 00:41:26.362 "psk": ":spdk-test:key1", 00:41:26.362 "allow_unrecognized_csi": false, 00:41:26.362 "method": "bdev_nvme_attach_controller", 00:41:26.362 "req_id": 1 00:41:26.362 } 00:41:26.362 Got JSON-RPC error response 00:41:26.362 response: 00:41:26.362 { 00:41:26.362 "code": -5, 00:41:26.362 "message": "Input/output error" 00:41:26.362 } 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@33 -- # sn=899503310 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 899503310 00:41:26.362 1 links removed 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@33 -- # sn=650399315 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 650399315 00:41:26.362 1 links removed 00:41:26.362 11:09:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1358777 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1358777 ']' 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1358777 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1358777 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1358777' 00:41:26.362 killing process with pid 1358777 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@973 -- # kill 1358777 00:41:26.362 Received shutdown signal, test time was about 1.000000 seconds 00:41:26.362 00:41:26.362 
Latency(us) 00:41:26.362 [2024-11-19T10:09:05.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:26.362 [2024-11-19T10:09:05.557Z] =================================================================================================================== 00:41:26.362 [2024-11-19T10:09:05.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:26.362 11:09:05 keyring_linux -- common/autotest_common.sh@978 -- # wait 1358777 00:41:26.623 11:09:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1358397 00:41:26.623 11:09:05 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1358397 ']' 00:41:26.623 11:09:05 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1358397 00:41:26.623 11:09:05 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:41:26.623 11:09:05 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:26.623 11:09:05 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1358397 00:41:26.623 11:09:05 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:26.623 11:09:05 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:26.623 11:09:05 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1358397' 00:41:26.623 killing process with pid 1358397 00:41:26.623 11:09:05 keyring_linux -- common/autotest_common.sh@973 -- # kill 1358397 00:41:26.623 11:09:05 keyring_linux -- common/autotest_common.sh@978 -- # wait 1358397 00:41:26.883 00:41:26.883 real 0m5.174s 00:41:26.883 user 0m9.638s 00:41:26.883 sys 0m1.408s 00:41:26.883 11:09:05 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:26.883 11:09:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:26.883 ************************************ 00:41:26.883 END TEST keyring_linux 00:41:26.883 ************************************ 00:41:26.883 11:09:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:41:26.883 11:09:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:41:26.883 11:09:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:41:26.883 11:09:05 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:41:26.883 11:09:05 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:41:26.883 11:09:05 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:41:26.883 11:09:05 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:41:26.883 11:09:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:26.883 11:09:05 -- common/autotest_common.sh@10 -- # set +x 00:41:26.883 11:09:05 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:41:26.883 11:09:05 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:41:26.883 11:09:05 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:41:26.883 11:09:05 -- common/autotest_common.sh@10 -- # set +x 00:41:35.028 INFO: APP EXITING 
00:41:35.028 INFO: killing all VMs 00:41:35.028 INFO: killing vhost app 00:41:35.028 INFO: EXIT DONE 00:41:37.576 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:41:37.576 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:41:37.576 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:41:37.576 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:41:37.836 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:41:37.836 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:41:37.836 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:41:37.836 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:41:37.836 0000:65:00.0 (144d a80a): Already using the nvme driver 00:41:37.836 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:41:37.836 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:41:37.836 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:41:37.836 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:41:37.836 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:41:37.836 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:41:38.097 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:41:38.097 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:41:41.590 Cleaning 00:41:41.590 Removing: /var/run/dpdk/spdk0/config 00:41:41.590 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:41.590 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:41.590 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:41.590 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:41.590 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:41.590 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:41.590 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:41.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:41.851 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:41.851 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:41.851 Removing: /var/run/dpdk/spdk1/config 00:41:41.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:41.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:41.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:41.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:41.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:41.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:41.852 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:41.852 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:41.852 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:41.852 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:41.852 Removing: /var/run/dpdk/spdk2/config 00:41:41.852 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:41.852 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:41.852 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:41.852 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:41.852 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:41.852 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:41.852 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:41.852 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:41.852 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:41.852 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:41.852 Removing: /var/run/dpdk/spdk3/config 00:41:41.852 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:41.852 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:41.852 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:41.852 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:41.852 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:41.852 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:41.852 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:41.852 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:41.852 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:41.852 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:41.852 Removing: /var/run/dpdk/spdk4/config 00:41:41.852 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:41.852 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:41.852 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:41.852 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:41.852 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:41.852 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:41.852 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:41.852 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:41.852 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:41:41.852 Removing: /var/run/dpdk/spdk4/hugepage_info 00:41:41.852 Removing: /dev/shm/bdev_svc_trace.1 00:41:41.852 Removing: /dev/shm/nvmf_trace.0 00:41:41.852 Removing: /dev/shm/spdk_tgt_trace.pid780732 00:41:41.852 Removing: /var/run/dpdk/spdk0 00:41:41.852 Removing: /var/run/dpdk/spdk1 00:41:41.852 Removing: /var/run/dpdk/spdk2 00:41:41.852 Removing: /var/run/dpdk/spdk3 00:41:41.852 Removing: /var/run/dpdk/spdk4 00:41:41.852 Removing: /var/run/dpdk/spdk_pid1029629 00:41:41.852 Removing: /var/run/dpdk/spdk_pid1035127 00:41:41.852 Removing: /var/run/dpdk/spdk_pid1037029 00:41:41.852 Removing: /var/run/dpdk/spdk_pid1039155 00:41:41.852 Removing: /var/run/dpdk/spdk_pid1039493 00:41:41.852 Removing: /var/run/dpdk/spdk_pid1039840 00:41:41.852 Removing: /var/run/dpdk/spdk_pid1040181 00:41:41.852 Removing: /var/run/dpdk/spdk_pid1040895 00:41:41.852 Removing: /var/run/dpdk/spdk_pid1043057 00:41:41.852 Removing: /var/run/dpdk/spdk_pid1044324 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1044914 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1047565 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1048850 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1049738 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1054656 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1061306 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1061308 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1061310 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1065951 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1076161 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1080955 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1088385 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1089656 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1091507 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1093201 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1098834 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1104751 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1109780 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1118873 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1118884 00:41:42.112 Removing: /var/run/dpdk/spdk_pid1123934 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1124266 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1124601 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1124942 00:41:42.113 Removing: 
/var/run/dpdk/spdk_pid1124947 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1130649 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1131155 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1136662 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1139899 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1146387 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1152927 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1163621 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1172104 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1172106 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1195063 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1195922 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1196628 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1197313 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1198372 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1199071 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1199753 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1200548 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1205804 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1206207 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1213742 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1214117 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1220591 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1225631 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1237233 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1237901 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1242963 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1243319 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1248347 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1255076 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1258261 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1270878 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1281564 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1283561 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1284566 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1304170 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1308889 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1312184 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1320371 00:41:42.113 Removing: /var/run/dpdk/spdk_pid1320420 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1326300 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1328519 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1331010 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1332236 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1334742 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1336115 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1346157 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1346664 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1347233 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1350166 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1350841 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1351357 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1356013 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1356085 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1357895 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1358397 00:41:42.373 Removing: /var/run/dpdk/spdk_pid1358777 00:41:42.373 Removing: /var/run/dpdk/spdk_pid779242 00:41:42.373 Removing: /var/run/dpdk/spdk_pid780732 00:41:42.373 Removing: /var/run/dpdk/spdk_pid781577 00:41:42.373 Removing: /var/run/dpdk/spdk_pid782620 00:41:42.373 Removing: /var/run/dpdk/spdk_pid782961 00:41:42.373 Removing: /var/run/dpdk/spdk_pid784022 00:41:42.373 Removing: /var/run/dpdk/spdk_pid784225 00:41:42.373 Removing: /var/run/dpdk/spdk_pid784497 00:41:42.373 Removing: /var/run/dpdk/spdk_pid785634 00:41:42.373 Removing: /var/run/dpdk/spdk_pid786392 00:41:42.373 Removing: 
/var/run/dpdk/spdk_pid786750 00:41:42.373 Removing: /var/run/dpdk/spdk_pid787098 00:41:42.373 Removing: /var/run/dpdk/spdk_pid787473 00:41:42.373 Removing: /var/run/dpdk/spdk_pid787776 00:41:42.373 Removing: /var/run/dpdk/spdk_pid788068 00:41:42.373 Removing: /var/run/dpdk/spdk_pid788419 00:41:42.373 Removing: /var/run/dpdk/spdk_pid788807 00:41:42.373 Removing: /var/run/dpdk/spdk_pid789873 00:41:42.373 Removing: /var/run/dpdk/spdk_pid793419 00:41:42.373 Removing: /var/run/dpdk/spdk_pid793747 00:41:42.373 Removing: /var/run/dpdk/spdk_pid794050 00:41:42.373 Removing: /var/run/dpdk/spdk_pid794227 00:41:42.373 Removing: /var/run/dpdk/spdk_pid794616 00:41:42.373 Removing: /var/run/dpdk/spdk_pid794933 00:41:42.373 Removing: /var/run/dpdk/spdk_pid795313 00:41:42.373 Removing: /var/run/dpdk/spdk_pid795531 00:41:42.373 Removing: /var/run/dpdk/spdk_pid795776 00:41:42.373 Removing: /var/run/dpdk/spdk_pid796025 00:41:42.373 Removing: /var/run/dpdk/spdk_pid796229 00:41:42.373 Removing: /var/run/dpdk/spdk_pid796399 00:41:42.373 Removing: /var/run/dpdk/spdk_pid796860 00:41:42.373 Removing: /var/run/dpdk/spdk_pid797198 00:41:42.373 Removing: /var/run/dpdk/spdk_pid797596 00:41:42.373 Removing: /var/run/dpdk/spdk_pid802277 00:41:42.373 Removing: /var/run/dpdk/spdk_pid808073 00:41:42.373 Removing: /var/run/dpdk/spdk_pid820176 00:41:42.373 Removing: /var/run/dpdk/spdk_pid820857 00:41:42.373 Removing: /var/run/dpdk/spdk_pid826268 00:41:42.373 Removing: /var/run/dpdk/spdk_pid826620 00:41:42.373 Removing: /var/run/dpdk/spdk_pid831694 00:41:42.373 Removing: /var/run/dpdk/spdk_pid838772 00:41:42.373 Removing: /var/run/dpdk/spdk_pid841938 00:41:42.373 Removing: /var/run/dpdk/spdk_pid854804 00:41:42.373 Removing: /var/run/dpdk/spdk_pid866001 00:41:42.373 Removing: /var/run/dpdk/spdk_pid868039 00:41:42.373 Removing: /var/run/dpdk/spdk_pid869258 00:41:42.634 Removing: /var/run/dpdk/spdk_pid890090 00:41:42.634 Removing: /var/run/dpdk/spdk_pid895050 00:41:42.634 Removing: /var/run/dpdk/spdk_pid951971 00:41:42.634 Removing: /var/run/dpdk/spdk_pid958392 00:41:42.634 Removing: /var/run/dpdk/spdk_pid965653 00:41:42.634 Removing: /var/run/dpdk/spdk_pid974013 00:41:42.634 Removing: /var/run/dpdk/spdk_pid974015 00:41:42.634 Removing: /var/run/dpdk/spdk_pid975022 00:41:42.634 Removing: /var/run/dpdk/spdk_pid976036 00:41:42.634 Removing: /var/run/dpdk/spdk_pid977068 00:41:42.634 Removing: /var/run/dpdk/spdk_pid977704 00:41:42.634 Removing: /var/run/dpdk/spdk_pid977807 00:41:42.634 Removing: /var/run/dpdk/spdk_pid978038 00:41:42.634 Removing: /var/run/dpdk/spdk_pid978275 00:41:42.634 Removing: /var/run/dpdk/spdk_pid978372 00:41:42.634 Removing: /var/run/dpdk/spdk_pid979371 00:41:42.634 Removing: /var/run/dpdk/spdk_pid980385 00:41:42.634 Removing: /var/run/dpdk/spdk_pid981393 00:41:42.634 Removing: /var/run/dpdk/spdk_pid982071 00:41:42.634 Removing: /var/run/dpdk/spdk_pid982084 00:41:42.634 Removing: /var/run/dpdk/spdk_pid982411 00:41:42.634 Removing: /var/run/dpdk/spdk_pid983854 00:41:42.634 Removing: /var/run/dpdk/spdk_pid985235 00:41:42.634 Removing: /var/run/dpdk/spdk_pid994938 00:41:42.634 Clean 00:41:42.634 11:09:21 -- common/autotest_common.sh@1453 -- # return 0 00:41:42.634 11:09:21 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:41:42.634 11:09:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:42.634 11:09:21 -- common/autotest_common.sh@10 -- # set +x 00:41:42.634 11:09:21 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:41:42.634 11:09:21 -- common/autotest_common.sh@732 -- # xtrace_disable 
00:41:42.634 11:09:21 -- common/autotest_common.sh@10 -- # set +x 00:41:42.895 11:09:21 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:42.895 11:09:21 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:41:42.895 11:09:21 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:41:42.895 11:09:21 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:41:42.895 11:09:21 -- spdk/autotest.sh@398 -- # hostname 00:41:42.895 11:09:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:41:42.895 geninfo: WARNING: invalid characters removed from testname! 00:42:09.475 11:09:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:11.388 11:09:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:13.303 11:09:52 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:15.214 11:09:54 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:16.597 11:09:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:18.505 11:09:57 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:19.888 11:09:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:19.888 11:09:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:42:19.888 11:09:58 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:42:19.888 11:09:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:42:19.888 11:09:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:42:19.888 11:09:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:19.888 + [[ -n 693827 ]] 00:42:19.888 + sudo kill 693827 00:42:19.899 [Pipeline] } 00:42:19.915 [Pipeline] // stage 00:42:19.920 [Pipeline] } 00:42:19.935 [Pipeline] // timeout 00:42:19.941 [Pipeline] } 00:42:19.955 [Pipeline] // catchError 00:42:19.960 [Pipeline] } 00:42:19.975 [Pipeline] // wrap 00:42:19.981 [Pipeline] } 00:42:19.994 [Pipeline] // catchError 00:42:20.004 [Pipeline] stage 00:42:20.006 [Pipeline] { (Epilogue) 00:42:20.020 [Pipeline] catchError 00:42:20.022 [Pipeline] { 00:42:20.035 [Pipeline] echo 00:42:20.037 Cleanup processes 00:42:20.043 [Pipeline] sh 00:42:20.332 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:20.333 1372238 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:20.347 [Pipeline] sh 00:42:20.636 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:20.636 ++ grep -v 'sudo pgrep' 00:42:20.636 ++ awk '{print $1}' 00:42:20.636 + sudo kill -9 00:42:20.636 + true 00:42:20.649 [Pipeline] sh 00:42:20.939 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:42:33.184 [Pipeline] sh 00:42:33.474 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:42:33.474 Artifacts sizes are good 00:42:33.491 [Pipeline] archiveArtifacts 00:42:33.499 Archiving artifacts 00:42:33.651 [Pipeline] sh 00:42:33.941 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:42:33.956 [Pipeline] cleanWs 00:42:33.967 [WS-CLEANUP] Deleting project workspace... 00:42:33.967 [WS-CLEANUP] Deferred wipeout is used... 00:42:33.974 [WS-CLEANUP] done 00:42:33.976 [Pipeline] } 00:42:33.994 [Pipeline] // catchError 00:42:34.008 [Pipeline] sh 00:42:34.403 + logger -p user.info -t JENKINS-CI 00:42:34.414 [Pipeline] } 00:42:34.429 [Pipeline] // stage 00:42:34.434 [Pipeline] } 00:42:34.450 [Pipeline] // node 00:42:34.455 [Pipeline] End of Pipeline 00:42:34.490 Finished: SUCCESS
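For reference, the coverage merge that produced cov_total.info above reduces to the following lcov sequence; a condensed sketch of the commands in this log (the genhtml/geninfo rc flags are trimmed for brevity), assuming cov_base.info was captured before the tests ran:

# Condensed sketch of the lcov post-processing logged above; OUT is this
# workspace's output directory (spdk/../output) and cov_base.info is the
# pre-test baseline capture.
OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q'

# Capture the counters accumulated during the test run (hostname as test name).
$LCOV -c --no-external -d spdk -t "$(hostname)" -o "$OUT/cov_test.info"

# Merge baseline and test counters, then strip sources outside our interest.
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"

rm -f cov_base.info cov_test.info   # intermediate captures, as in the rm above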